How to navigate the specifications?
The “Module Specifications” section uses tags to dynamically render content based on the selected attributes, such as the IaC language, module classification, category, severity and more. The tags are defined in the header of each specification page.
To make it easier for module owners and contributors to navigate the documentation, the specifications are grouped into distinct pages by IaC language (Bicep | Terraform) and module classification (resource | pattern | utility). The specifications on each page are further ordered by category (e.g., Composition, CodeStyle, Testing, etc.), severity of the requirements (MUST | SHOULD | MAY) and the stage of the module’s lifecycle at which the specification is typically applicable (Initial | BAU | EOL).
To find what you need, simply decide which IaC language you’d like to develop in and what classification your module falls under, then navigate to the respective page to find the specifications that are relevant to you.
Info
All specifications have a 4-9 character long unique ID - a combination of letters and numbers. The letters only carry legacy meaning, leveraged by the AVM core team, and are no longer used to group the specifications in any visible way. The ID is used to reference the specification in code, documentation and discussions.
Specification Tags
The following tags are used to qualify the specifications:
Each tag is a concatenation of exactly one of the keys and one of the values, e.g., Language-Bicep, Class-Resource, Type-Functional, etc. When it’s marked as Multiple, it means that the tag can have multiple values, e.g., Language-Bicep, Language-Terraform, or Persona-Owner, Persona-Contributor, etc. When it’s marked as Single, it means that the tag can have only one value, e.g., Type-Functional, Lifecycle-Initial, etc.
The Persona, Lifecycle and Validation tags are defined below; the Severity levels (MUST | SHOULD | MAY) are described under “How to read the specifications?”.
Persona
Who is this specification for? The Owner is the module owner, while the Contributor is anyone who contributes to the module.
Lifecycle
When is this specification mostly relevant?
The Initial stage is when the module is first being developed - e.g., naming-related specs are labeled with Lifecycle-Initial, as the naming of the module only happens once: at the beginning of its life.
The BAU (business as usual) stage is any time during the module’s typical lifecycle - e.g., specs that describe coding standards are relevant throughout the module’s life, any time a new module version is released.
The EOL (end of life) stage is when the module is being decommissioned - e.g., specs describing how a module should be retired are labeled with Lifecycle-EOL.
Validation
How is this specification checked/validated/enforced?
Manual means that the specification is manually enforced at the time of the module review (at the time of the first or any subsequent module version release).
CI/Informational means that the module is checked against the specification by a CI pipeline, but the failure is only informational and doesn’t block the module release.
CI/Enforced means that the specification is automatically enforced by a CI pipeline, and the failure blocks the module release.
Note: the BCP/ or TF/ prefix is required, as shared (language-agnostic) specifications may have different levels of validation/enforcement per language - e.g., it is possible that a specification is enforced by a CI pipeline for Bicep modules, while it is manually enforced for Terraform modules.
Why are there language specific specifications?
While every effort is being made to standardize requirements and implementation details across all languages (and most specifications, in fact, are applicable to all), it is expected that some specifications will differ between languages to ensure we follow the best practices and leverage the features of each language.
How to read the specifications?
Important
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119.
As you’re developing/maintaining a module as a module owner or contributor, you need to ensure that your module adheres to the specifications outlined in this section. The specifications are designed to ensure that all AVM modules are consistent, secure, and compliant with best practices.
There are 3 levels of specifications:
MUST: These are mandatory requirements that MUST be followed.
SHOULD: These are recommended requirements that SHOULD be followed, unless there is a good reason not to.
MAY: These are optional requirements that MAY be followed at the module owner’s/contributor’s discretion.
Subsections of Module Specifications
Terraform Specifications
Specifications by Category and Module Classification
Any updates to existing or new specifications for Terraform must be submitted as a draft for review by Azure Terraform PG/Engineering (@Azure/terraform-avm) and the AVM core team (@Azure/avm-core-team).
Important
Provider Versatility: Users have the autonomy to choose between AzureRM, AzAPI, or a combination of both, tailored to the specific complexity of module requirements.
What changed recently?
No specifications were changed in the last 30 days.
Subsections of Terraform
Terraform Interfaces
This chapter details the interfaces/schemas for the AVM Resource Modules features/extension resources as referenced in RMFR4 and RMFR5.
Diagnostic Settings
Important
Allowed values for logs and metric categories or category groups MUST NOT be specified to keep the module implementation evergreen for any new categories or category groups added by RPs, without module owners having to update a list of allowed values and cut a new release of their module.
variable"diagnostic_settings" {
type = map(object({
name = optional(string, null)
log_categories = optional(set(string), [])
log_groups = optional(set(string), ["allLogs"])
metric_categories = optional(set(string), ["AllMetrics"])
log_analytics_destination_type = optional(string, "Dedicated")
workspace_resource_id = optional(string, null)
storage_account_resource_id = optional(string, null)
event_hub_authorization_rule_resource_id = optional(string, null)
event_hub_name = optional(string, null)
marketplace_partner_resource_id = optional(string, null)
}))
default = {}
nullable = falsevalidation {
condition = alltrue([for_, vin var.diagnostic_settings: contains(["Dedicated", "AzureDiagnostics"], v.log_analytics_destination_type)])
error_message = "Log analytics destination type must be one of: 'Dedicated', 'AzureDiagnostics'." }
validation {
condition = alltrue(
[
for_, vin var.diagnostic_settings:v.workspace_resource_id!=null||v.storage_account_resource_id!=null||v.event_hub_authorization_rule_resource_id!=null||v.marketplace_partner_resource_id!=null ]
)
error_message = "At least one of `workspace_resource_id`, `storage_account_resource_id`, `marketplace_partner_resource_id`, or `event_hub_authorization_rule_resource_id`, must be set." }
description = <<DESCRIPTION A map of diagnostic settings to create on the Key Vault. The map key is deliberately arbitrary to avoid issues where map keys maybe unknown at plan time.
- `name` - (Optional) The name of the diagnostic setting. One will be generated if not set, however this will not be unique if you want to create multiple diagnostic setting resources.
- `log_categories` - (Optional) A set of log categories to send to the log analytics workspace. Defaults to `[]`.
- `log_groups` - (Optional) A set of log groups to send to the log analytics workspace. Defaults to `["allLogs"]`.
- `metric_categories` - (Optional) A set of metric categories to send to the log analytics workspace. Defaults to `["AllMetrics"]`.
- `log_analytics_destination_type` - (Optional) The destination type for the diagnostic setting. Possible values are `Dedicated` and `AzureDiagnostics`. Defaults to `Dedicated`.
- `workspace_resource_id` - (Optional) The resource ID of the log analytics workspace to send logs and metrics to.
- `storage_account_resource_id` - (Optional) The resource ID of the storage account to send logs and metrics to.
- `event_hub_authorization_rule_resource_id` - (Optional) The resource ID of the event hub authorization rule to send logs and metrics to.
- `event_hub_name` - (Optional) The name of the event hub. If none is specified, the default event hub will be selected.
- `marketplace_partner_resource_id` - (Optional) The full ARM resource ID of the Marketplace resource to which you would like to send Diagnostic LogsLogs.
DESCRIPTION } # Sample resource
resource"azurerm_monitor_diagnostic_setting""this" {
for_each = var.diagnostic_settingsname = each.value.name!=null? each.value.name:"diag-${var.name}"target_resource_id = azurerm_<MY_RESOURCE>.this.idstorage_account_id = each.value.storage_account_resource_ideventhub_authorization_rule_id = each.value.event_hub_authorization_rule_resource_ideventhub_name = each.value.event_hub_namepartner_solution_id = each.value.marketplace_partner_resource_idlog_analytics_workspace_id = each.value.workspace_resource_idlog_analytics_destination_type = each.value.log_analytics_destination_typedynamic"enabled_log" {
for_each = each.value.log_categoriescontent {
category = enabled_log.value }
}
dynamic"enabled_log" {
for_each = each.value.log_groupscontent {
category_group = enabled_log.value }
}
dynamic"enabled_metric" {
for_each = each.value.metric_categoriescontent {
category = enabled_metric.value }
}
}
In the provided example for Diagnostic Settings, both logs and metrics are enabled for the associated resource. However, it is IMPORTANT to note that certain resources may not support both diagnostic setting types/categories. In such cases, the resource configuration MUST be modified accordingly to ensure proper functionality and compliance with system requirements.
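For illustration only, a consumer of a module implementing this interface could pass the variable as in the following sketch; the module source and the workspace resource ID are placeholders, not values mandated by this specification.

```terraform
module "example" {
  source = "Azure/avm-res-<provider>-<resource>/azurerm" # placeholder module source

  # ... other required inputs for the module ...

  diagnostic_settings = {
    to_law = {
      name                  = "sendToLogAnalytics"
      workspace_resource_id = "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" # placeholder
      # log_groups and metric_categories keep their defaults (["allLogs"], ["AllMetrics"]).
    }
  }
}
```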
Role Assignments
variable"role_assignments" {
type = map(object({
role_definition_id_or_name = stringprincipal_id = stringdescription = optional(string, null)
skip_service_principal_aad_check = optional(bool, false)
condition = optional(string, null)
condition_version = optional(string, null)
delegated_managed_identity_resource_id = optional(string, null)
principal_type = optional(string, null)
}))
default = {}
nullable = falsedescription = <<DESCRIPTION A map of role assignments to create on the <RESOURCE>. The map key is deliberately arbitrary to avoid issues where map keys maybe unknown at plan time.
- `role_definition_id_or_name` - The ID or name of the role definition to assign to the principal.
- `principal_id` - The ID of the principal to assign the role to.
- `description` - (Optional) The description of the role assignment.
- `skip_service_principal_aad_check` - (Optional) If set to true, skips the Azure Active Directory check for the service principal in the tenant. Defaults to false.
- `condition` - (Optional) The condition which will be used to scope the role assignment.
- `condition_version` - (Optional) The version of the condition syntax. Leave as `null` if you are not using a condition, if you are then valid values are '2.0'.
- `delegated_managed_identity_resource_id` - (Optional) The delegated Azure Resource Id which contains a Managed Identity. Changing this forces a new resource to be created. This field is only used in cross-tenant scenario.
- `principal_type` - (Optional) The type of the `principal_id`. Possible values are `User`, `Group` and `ServicePrincipal`. It is necessary to explicitly set this attribute when creating role assignments if the principal creating the assignment is constrained by ABAC rules that filters on the PrincipalType attribute.
> Note: only set `skip_service_principal_aad_check` to true if you are assigning a role to a service principal.
DESCRIPTION }
locals {
role_definition_resource_substring = "providers/Microsoft.Authorization/roleDefinitions" } # Example resource declaration
resource"azurerm_role_assignment""this" {
for_each = var.role_assignmentsscope = azurerm_MY_RESOURCE.this.idrole_definition_id = strcontains(lower(each.value.role_definition_id_or_name), lower(local.role_definition_resource_substring)) ? each.value.role_definition_id_or_name:nullrole_definition_name = strcontains(lower(each.value.role_definition_id_or_name), lower(local.role_definition_resource_substring)) ?null: each.value.role_definition_id_or_nameprincipal_id = each.value.principal_idcondition = each.value.conditioncondition_version = each.value.condition_versionskip_service_principal_aad_check = each.value.skip_service_principal_aad_checkdelegated_managed_identity_resource_id = each.value.delegated_managed_identity_resource_idprincipal_type = each.value.principal_type }
Details on child, extension and cross-referenced resources:
Modules MUST support Role Assignments on child, extension and cross-referenced resources as well as the primary resource via parameters/variables
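For illustration only, a consumer could pass the role_assignments map as below; the role, principal ID and module source are placeholder values.

```terraform
module "example" {
  source = "Azure/avm-res-<provider>-<resource>/azurerm" # placeholder module source

  # ... other required inputs for the module ...

  role_assignments = {
    readers = {
      role_definition_id_or_name = "Reader"                               # resolved to role_definition_name in the sample resource above
      principal_id               = "00000000-0000-0000-0000-000000000000" # placeholder object ID
      principal_type             = "Group"
    }
  }
}
```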
Resource Locks
variable"lock" {
type = object({
kind = stringname = optional(string, null)
})
default = nulldescription = <<DESCRIPTION Controls the Resource Lock configuration for this resource. The following properties can be specified:
- `kind` - (Required) The type of lock. Possible values are `\"CanNotDelete\"` and `\"ReadOnly\"`.
- `name` - (Optional) The name of the lock. If not specified, a name will be generated based on the `kind` value. Changing this forces the creation of a new resource.
DESCRIPTIONvalidation {
condition = var.lock!=null? contains(["CanNotDelete", "ReadOnly"], var.lock.kind) :trueerror_message = "Lock kind must be either `\"CanNotDelete\"` or `\"ReadOnly\"`." }
} # Example resource implementation
resource"azurerm_management_lock""this" {
count = var.lock!=null?1:0lock_level = var.lock.kindname = coalesce(var.lock.name, "lock-${var.lock.kind}")
scope = azurerm_MY_RESOURCE.this.idnotes = var.lock.kind =="CanNotDelete"?"Cannot delete the resource or its child resources.":"Cannot delete or modify the resource or its child resources." }
lock = {
  name = "lock-{resourcename}" # optional
  kind = "CanNotDelete"
}
Details on child and extension resources:
Locks SHOULD be able to be set for child resources of the primary resource in resource modules
Details on cross-referenced resources:
Locks MUST be automatically applied to cross-referenced resources if the primary resource has a lock applied.
This MUST also be able to be turned off for each of the cross-referenced resources by the module consumer via a parameter/variable if they desire
An example of this is a Key Vault module that has a Private Endpoints enabled. If a lock is applied to the Key Vault via the lock parameter/variable then the lock should also be applied to the Private Endpoint automatically, unless the privateEndpointLock/private_endpoint_lock (example name) parameter/variable is set to None
Important
In Terraform, locks become part of the resource graph and suitable depends_on values should be set. Note that, during a destroy operation, Terraform will remove the locks before removing the resource itself, reducing the usefulness of the lock somewhat. Also note, due to eventual consistency in Azure, use of locks can cause destroy operations to fail as the lock may not have been fully removed by the time the destroy operation is executed.
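A minimal sketch of such a dependency, assuming a hypothetical Key Vault module where azurerm_key_vault.this is the primary resource and azurerm_key_vault_secret.this is a child resource; the resource addresses are illustrative only.

```terraform
resource "azurerm_management_lock" "this" {
  count      = var.lock != null ? 1 : 0
  lock_level = var.lock.kind
  name       = coalesce(var.lock.name, "lock-${var.lock.kind}")
  scope      = azurerm_key_vault.this.id # hypothetical primary resource

  # The lock is created after the child resources it protects and, on destroy,
  # removed before them (subject to the eventual-consistency caveat above).
  depends_on = [
    azurerm_key_vault_secret.this # hypothetical child resource
  ]
}
```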
Tags
variable"tags" {
type = map(string)
default = nulldescription = "(Optional) Tags of the resource." }
Details on child, extension and cross-referenced resources:
Tags MUST be automatically applied to child, extension and cross-referenced resources, if tags are applied to the primary resource.
By default, all tags set for the primary resource will automatically be passed down to child, extension and cross-referenced resources.
This MUST be able to be overridden by the module consumer, so they can specify alternate tags for child, extension and cross-referenced resources via a parameter/variable, if they desire.
If overridden by the module consumer, no merge/union of tags will take place from the primary resource; only the tags specified for the child, extension and cross-referenced resources will be applied, as illustrated in the sketch below.
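A sketch of one way to implement this override behaviour, assuming a hypothetical private_endpoint_tags variable; the variable name and the commented expression are illustrative only.

```terraform
# Hypothetical override variable: when null, the primary resource tags are passed down unchanged.
variable "private_endpoint_tags" {
  type        = map(string)
  default     = null
  description = "(Optional) Tags for the private endpoint. If set, these replace (and are not merged with) `var.tags`."
}

# On the child/extension/cross-referenced resource:
# tags = var.private_endpoint_tags != null ? var.private_endpoint_tags : var.tags
```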
Managed Identities
variable"managed_identities" {
type = object({
system_assigned = optional(bool, false)
user_assigned_resource_ids = optional(set(string), [])
})
default = {}
nullable = falsedescription = <<DESCRIPTION Controls the Managed Identity configuration on this resource. The following properties can be specified:
- `system_assigned` - (Optional) Specifies if the System Assigned Managed Identity should be enabled.
- `user_assigned_resource_ids` - (Optional) Specifies a list of User Assigned Managed Identity resource IDs to be assigned to this resource.
DESCRIPTION } # Helper locals to make the dynamic block more readable
# There are three attributes here to cater for resources that
# support both user and system MIs, only system MIs, and only user MIs
locals {
managed_identities = {
system_assigned_user_assigned = (var.managed_identities.system_assigned|| length(var.managed_identities.user_assigned_resource_ids) >0) ? {
this = {
type = var.managed_identities.system_assigned&& length(var.managed_identities.user_assigned_resource_ids) >0?"SystemAssigned, UserAssigned": length(var.managed_identities.user_assigned_resource_ids) >0?"UserAssigned":"SystemAssigned"user_assigned_resource_ids = var.managed_identities.user_assigned_resource_ids }
} : {}
system_assigned = var.managed_identities.system_assigned? {
this = {
type = "SystemAssigned" }
} : {}
user_assigned = length(var.managed_identities.user_assigned_resource_ids) >0? {
this = {
type = "UserAssigned"user_assigned_resource_ids = var.managed_identities.user_assigned_resource_ids }
} : {}
}
} ## Resources supporting both SystemAssigned and UserAssigned
dynamic"identity" {
for_each = local.managed_identities.system_assigned_user_assignedcontent {
type = identity.value.typeidentity_ids = identity.value.user_assigned_resource_ids }
} ## Resources that only support SystemAssigned
dynamic"identity" {
for_each = identity.managed_identities.system_assignedcontent {
type = identity.value.type }
} ## Resources that only support UserAssigned
dynamic"identity" {
for_each = local.managed_identities.user_assignedcontent {
type = identity.value.typeidentity_ids = identity.value.user_assigned_resource_ids }
}
Reason for differences in the User Assigned data type between languages:
We do not foresee the Managed Identity Resource Provider team ever adding additional properties within the empty object ({}) value required on the input of a User Assigned Managed Identity.
In Bicep we have therefore removed the need for this to be declared and have converted it to a simple array of Resource IDs.
However, in Terraform we have left it as an object/map, as this simplifies for_each and other loop mechanisms and provides more consistency in plan, apply and destroy operations.
Especially when adding, removing or changing the order of the User Assigned Managed Identities as they are declared
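For illustration, a consumer enabling both identity types on a module implementing this interface might pass the following; the user assigned identity resource ID is a placeholder.

```terraform
managed_identities = {
  system_assigned = true
  user_assigned_resource_ids = [
    "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>" # placeholder
  ]
}
```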
Private Endpoints
# In this example we only support one service, e.g. Key Vault.
# If your service has multiple private endpoint services, then expose the service name.

# This variable is used to determine if the private_dns_zone_group block should be included,
# or if it is to be managed externally, e.g. using Azure Policy.
# https://github.com/Azure/terraform-azurerm-avm-res-keyvault-vault/issues/32
# Alternatively you can use AzAPI, which does not have this issue.
variable "private_endpoints_manage_dns_zone_group" {
  type        = bool
  default     = true
  nullable    = false
  description = "Whether to manage private DNS zone groups with this module. If set to false, you must manage private DNS zone groups externally, e.g. using Azure Policy."
}

variable "private_endpoints" {
  type = map(object({
    name = optional(string, null)
    role_assignments = optional(map(object({
      role_definition_id_or_name             = string
      principal_id                           = string
      description                            = optional(string, null)
      skip_service_principal_aad_check       = optional(bool, false)
      condition                              = optional(string, null)
      condition_version                      = optional(string, null)
      delegated_managed_identity_resource_id = optional(string, null)
      principal_type                         = optional(string, null)
    })), {})
    lock = optional(object({
      kind = string
      name = optional(string, null)
    }), null)
    tags               = optional(map(string), null)
    subnet_resource_id = string
    subresource_name   = string # NOTE: `subresource_name` can be excluded if the resource does not support multiple sub resource types (e.g. storage account supports blob, queue, etc.)
    private_dns_zone_group_name              = optional(string, "default")
    private_dns_zone_resource_ids            = optional(set(string), [])
    application_security_group_associations = optional(map(string), {})
    private_service_connection_name          = optional(string, null)
    network_interface_name                   = optional(string, null)
    location                                 = optional(string, null)
    resource_group_name                      = optional(string, null)
    ip_configurations = optional(map(object({
      name               = string
      private_ip_address = string
    })), {})
  }))
  default     = {}
  nullable    = false
  description = <<DESCRIPTION
A map of private endpoints to create on the Key Vault. The map key is deliberately arbitrary to avoid issues where map keys may be unknown at plan time.

- `name` - (Optional) The name of the private endpoint. One will be generated if not set.
- `role_assignments` - (Optional) A map of role assignments to create on the private endpoint. The map key is deliberately arbitrary to avoid issues where map keys may be unknown at plan time. See `var.role_assignments` for more information.
  - `role_definition_id_or_name` - The ID or name of the role definition to assign to the principal.
  - `principal_id` - The ID of the principal to assign the role to.
  - `description` - (Optional) The description of the role assignment.
  - `skip_service_principal_aad_check` - (Optional) If set to true, skips the Azure Active Directory check for the service principal in the tenant. Defaults to false.
  - `condition` - (Optional) The condition which will be used to scope the role assignment.
  - `condition_version` - (Optional) The version of the condition syntax. Leave as `null` if you are not using a condition, if you are then valid values are '2.0'.
  - `delegated_managed_identity_resource_id` - (Optional) The delegated Azure Resource Id which contains a Managed Identity. Changing this forces a new resource to be created. This field is only used in cross-tenant scenario.
  - `principal_type` - (Optional) The type of the `principal_id`. Possible values are `User`, `Group` and `ServicePrincipal`. It is necessary to explicitly set this attribute when creating role assignments if the principal creating the assignment is constrained by ABAC rules that filters on the PrincipalType attribute.
- `lock` - (Optional) The lock level to apply to the private endpoint. Default is `None`. Possible values are `None`, `CanNotDelete`, and `ReadOnly`.
  - `kind` - (Required) The type of lock. Possible values are `\"CanNotDelete\"` and `\"ReadOnly\"`.
  - `name` - (Optional) The name of the lock. If not specified, a name will be generated based on the `kind` value. Changing this forces the creation of a new resource.
- `tags` - (Optional) A mapping of tags to assign to the private endpoint.
- `subnet_resource_id` - The resource ID of the subnet to deploy the private endpoint in.
- `subresource_name` - The name of the sub resource for the private endpoint.
- `private_dns_zone_group_name` - (Optional) The name of the private DNS zone group. One will be generated if not set.
- `private_dns_zone_resource_ids` - (Optional) A set of resource IDs of private DNS zones to associate with the private endpoint. If not set, no zone groups will be created and the private endpoint will not be associated with any private DNS zones. DNS records must be managed external to this module.
- `application_security_group_associations` - (Optional) A map of resource IDs of application security groups to associate with the private endpoint. The map key is deliberately arbitrary to avoid issues where map keys may be unknown at plan time.
- `private_service_connection_name` - (Optional) The name of the private service connection. One will be generated if not set.
- `network_interface_name` - (Optional) The name of the network interface. One will be generated if not set.
- `location` - (Optional) The Azure location where the resources will be deployed. Defaults to the location of the resource group.
- `resource_group_name` - (Optional) The resource group where the resources will be deployed. Defaults to the resource group of the Key Vault.
- `ip_configurations` - (Optional) A map of IP configurations to create on the private endpoint. If not specified the platform will create one. The map key is deliberately arbitrary to avoid issues where map keys may be unknown at plan time.
  - `name` - The name of the IP configuration.
  - `private_ip_address` - The private IP address of the IP configuration.
DESCRIPTION
}

# The PE resource when we are managing the private_dns_zone_group block:
resource "azurerm_private_endpoint" "this" {
  for_each = { for k, v in var.private_endpoints : k => v if var.private_endpoints_manage_dns_zone_group }

  name                          = each.value.name != null ? each.value.name : "pep-${var.name}"
  location                      = each.value.location != null ? each.value.location : var.location
  resource_group_name           = each.value.resource_group_name != null ? each.value.resource_group_name : var.resource_group_name
  subnet_id                     = each.value.subnet_resource_id
  custom_network_interface_name = each.value.network_interface_name
  tags                          = each.value.tags

  private_service_connection {
    name                           = each.value.private_service_connection_name != null ? each.value.private_service_connection_name : "pse-${var.name}"
    private_connection_resource_id = azurerm_key_vault.this.id
    is_manual_connection           = false
    subresource_names              = ["MYSERVICE"] # map to each.value.subresource_name if there are multiple services.
  }

  dynamic "private_dns_zone_group" {
    for_each = length(each.value.private_dns_zone_resource_ids) > 0 ? ["this"] : []
    content {
      name                 = each.value.private_dns_zone_group_name
      private_dns_zone_ids = each.value.private_dns_zone_resource_ids
    }
  }

  dynamic "ip_configuration" {
    for_each = each.value.ip_configurations
    content {
      name               = ip_configuration.value.name
      subresource_name   = "MYSERVICE" # map to each.value.subresource_name if there are multiple services.
      member_name        = "MYSERVICE" # map to each.value.subresource_name if there are multiple services.
      private_ip_address = ip_configuration.value.private_ip_address
    }
  }
}

# The PE resource when we are **not** managing the private_dns_zone_group block:
resource "azurerm_private_endpoint" "this_unmanaged_dns_zone_groups" {
  for_each = { for k, v in var.private_endpoints : k => v if !var.private_endpoints_manage_dns_zone_group }

  # ... repeat the configuration above,
  # **omitting the private_dns_zone_group block**,
  # then add the following lifecycle block to ignore changes to the private_dns_zone_group block.
  lifecycle {
    ignore_changes = [private_dns_zone_group]
  }
}

# Private endpoint application security group associations.
# We merge the nested maps from private endpoints and application security group associations into a single map.
locals {
  private_endpoint_application_security_group_associations = { for assoc in flatten([
    for pe_k, pe_v in var.private_endpoints : [
      for asg_k, asg_v in pe_v.application_security_group_associations : {
        asg_key         = asg_k
        pe_key          = pe_k
        asg_resource_id = asg_v
      }
    ]
  ]) : "${assoc.pe_key}-${assoc.asg_key}" => assoc }
}

resource "azurerm_private_endpoint_application_security_group_association" "this" {
  for_each = local.private_endpoint_application_security_group_associations

  private_endpoint_id           = azurerm_private_endpoint.this[each.value.pe_key].id
  application_security_group_id = each.value.asg_resource_id
}

# You need an additional resource when not managing private_dns_zone_group with this module.
# In your output you need to select the correct resource based on the value of var.private_endpoints_manage_dns_zone_group:
output "private_endpoints" {
  value       = var.private_endpoints_manage_dns_zone_group ? azurerm_private_endpoint.this : azurerm_private_endpoint.this_unmanaged_dns_zone_groups
  description = <<DESCRIPTION
A map of the private endpoints created.
DESCRIPTION
}
The properties defined in the schema above are the minimum amount of properties expected to be exposed for Private Endpoints in AVM Resource Modules.
A module owner MAY choose to expose additional properties of the Private Endpoint resource.
However, module owners considering this SHOULD contact the AVM core team first to consult on how the property should be exposed to avoid future breaking changes to the schema that may be enforced upon them.
Module owners MAY choose to define a list of allowed values for the ‘service’ (a.k.a. groupIds) property.
However, they should do so with caution, because should a new service appear for their resource module, a new release will need to be cut to add this new service to the allowed values.
Whereas not specifying allowed values allows flexibility from day 0, without the need for any changes and releases to be made.
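For illustration, a consumer-side value for the private_endpoints variable might look like the following sketch; the subnet and private DNS zone resource IDs are placeholders.

```terraform
private_endpoints = {
  primary = {
    subnet_resource_id = "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/<subnet-name>" # placeholder
    private_dns_zone_resource_ids = [
      "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Network/privateDnsZones/privatelink.vaultcore.azure.net" # placeholder
    ]
    # All other attributes (name, lock, tags, role_assignments, ip_configurations, ...) keep their defaults.
  }
}
```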
A module MUST have an owner that is defined and managed by a GitHub Team in the Azure GitHub organization.
Today this is only Microsoft FTEs, but everyone is welcome to contribute. The module just MUST be owned by a Microsoft FTE (today) so we can enforce and provide the long-term support required by this initiative.
Note
The names for the GitHub teams for each approved module are already defined in the respective Module Indexes. These teams MUST be created (and used) for each module.
ID: SNFR20 - Category: Contribution/Support - GitHub Teams Only
All GitHub repositories that AVM modules are published from and hosted within MUST assign GitHub repository permissions to GitHub teams only.
Each module MUST have separate GitHub teams assigned for module owners AND module contributors respectively. These GitHub teams MUST be created in the Azure organization in GitHub.
There MUST NOT be any GitHub repository permissions assigned to individual users.
Note
The names for the GitHub teams for each approved module are already defined in the respective Module Indexes. These teams MUST be created (and used) for each module.
The @Azure prefix in the last column of the tables linked above represents the “Azure” GitHub organization all AVM-related repositories exist in. DO NOT include this segment in the team’s name!
Important
Non-FTE / external contributors (subject matter experts that aren’t Microsoft employees) can’t be members of the teams described in this chapter, hence, they won’t gain any extra permissions on AVM repositories, therefore, they need to work in forks.
Naming Convention
The naming convention for the GitHub teams MUST follow the below pattern:
<hyphenated module name>-module-owners-<bicep/tf> - to be assigned as the GitHub repository’s Module Owners team
<hyphenated module name>-module-contributors-<bicep/tf> - to be assigned as the GitHub repository’s Module Contributors team
Note
The naming convention for Terraform modules is slightly different from the naming convention for their respective GitHub teams.
Segments:
<hyphenated module name> == the AVM Module’s name, with each segment separated by dashes, i.e., avm-res-<resource provider>-<ARM resource type>
All officially documented module owner(s) MUST be added to the -module-owners- team. The -module-owners- team MUST NOT have any other members.
Any additional module contributors whom the module owner(s) agreed to work with MUST be added to the -module-contributors- team.
Unless explicitly requested and agreed, members of the AVM core team or any PG teams MUST NOT be added to the -module-owners- or -module-contributors- teams as permissions for them are granted through the teams described in SNFR9.
Grant Permissions - Bicep
Team memberships
Note
In case of Bicep modules, permissions to the BRM repository (the repo of the Bicep Registry) are granted via assigning the -module-owners- and -module-contributors- teams to parent teams that already have the required level of access configured. While it is the module owner’s responsibility to initiate the addition of their teams to the respective parents, only the AVM core team can approve this parent-child relationship.
Module owners MUST create their -module-owners- and -module-contributors- teams and as part of the provisioning process, they MUST request the addition of these teams to their respective parent teams (see the table below for details).
| GitHub Team Name | Description | Permissions | Permissions granted through | Where to work? |
| --- | --- | --- | --- | --- |
| <hyphenated module name>-module-owners-bicep | AVM Bicep Module Owners - <module name> | Write | Assignment to the avm-technical-reviewers-bicep parent team. | |
Examples - GitHub teams required for the Bicep resource module of Azure Virtual Network (avm/res/network/virtual-network):
avm-res-network-virtualnetwork-module-owners-bicep –> assign to the avm-technical-reviewers-bicep parent team.
avm-res-network-virtualnetwork-module-contributors-bicep –> assign to the avm-module-contributors-bicep parent team.
Tip
Direct link to create a new GitHub team and assign it to its parent: Create new team
Fill in the values as follows:
Team name: Following the naming convention described above, use the value defined in the module indexes.
Description: Follow the guidance above (see the Description column in the table above).
Parent team: Follow the guidance above (see the Permissions granted through column in the table above).
Team visibility: Visible
Team notifications: Enabled
CODEOWNERS file
As part of the “initial Pull Request” (that publishes the first version of the module), module owners MUST add an entry to the CODEOWNERS file in the BRM repository (here).
Note
Through this approach, the AVM core team will grant review permission to module owners as part of the standard PR review process.
Every CODEOWNERS entry (line) MUST include the following segments separated by a single whitespace character:
Path of the module, relative to the repo’s root, e.g.: /avm/res/network/virtual-network/
The -module-owners- team, with the @Azure/ prefix, e.g., @Azure/avm-res-network-virtualnetwork-module-owners-bicep
The GitHub team of the AVM Bicep reviewers, with the @Azure/ prefix, i.e., @Azure/avm-module-reviewers-bicep
Example - CODEOWNERS entry for the Bicep resource module of Azure Virtual Network (avm/res/network/virtual-network):
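Putting the segments above together, such an entry would presumably look like the following (illustrative reconstruction based on the path and team names listed above):

```
/avm/res/network/virtual-network/ @Azure/avm-res-network-virtualnetwork-module-owners-bicep @Azure/avm-module-reviewers-bicep
```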
Module owners MUST assign the -module-owners- and -module-contributors- teams the necessary permissions on their Terraform module repository per the guidance below.
| GitHub Team Name | Description | Permissions | Permissions granted through | Where to work? |
| --- | --- | --- | --- | --- |
| <module name>-module-owners-tf | AVM Terraform Module Owners - <module name> | Admin | Direct assignment to repo | Module owner can decide whether they want to work in a branch local to the repo or in a fork. |
Only the latest released version of a module MUST be supported.
For example, if an AVM Resource Module is used in an AVM Pattern Module that was working but now is not, the first step for the AVM Pattern Module owner should be to upgrade to the latest version of the AVM Resource Module, test, and then, if not fixed, troubleshoot and fix forward from that latest version of the AVM Resource Module onwards.
This avoids AVM module owners having to maintain multiple major release versions.
```shell
# Linux / MacOS
# For Windows replace $PWD with the local path of your repository
docker run -it -v $PWD:/repo -w /repo mcr.microsoft.com/powershell pwsh -Command '
#Invoke-WebRequest -Uri "https://zojovano.github.io/azure-verified-modules-copy/scripts/Set-AvmGitHubLabels.ps1" -OutFile "Set-AvmGitHubLabels.ps1"
$gh_version = "2.44.1"
Invoke-WebRequest -Uri "https://github.com/cli/cli/releases/download/v2.44.1/gh_2.44.1_linux_amd64.tar.gz" -OutFile "gh_$($gh_version)_linux_amd64.tar.gz"
apt-get update && apt-get install -y git
tar -xzf "gh_$($gh_version)_linux_amd64.tar.gz"
ls -lsa
mv "gh_$($gh_version)_linux_amd64/bin/gh" /usr/local/bin/
rm "gh_$($gh_version)_linux_amd64.tar.gz" && rm -rf "gh_$($gh_version)_linux_amd64"
gh --version
ls -lsa
gh auth login
$OrgProject = "Azure/terraform-azurerm-avm-res-kusto-cluster"
gh auth status
./Set-AvmGitHubLabels.ps1 -RepositoryName $OrgProject -CreateCsvLabelExports $false -NoUserPrompts $true
'
```
By default this script will only update and append labels on the repository specified. However, this can be changed by setting the parameter -UpdateAndAddLabelsOnly to $false, which will remove all the labels from the repository first and then apply the AVM labels from the CSV only.
Make sure you elevate your privilege to admin level or the labels will not be applied to your repository. Go to repos.opensource.microsoft.com/orgs/Azure/repos/ to request admin access before running the script.
Full Script:
The Set-AvmGitHubLabels.ps1 script can be downloaded from here.
[Diagnostics.CodeAnalysis.SuppressMessageAttribute("PSAvoidUsingWriteHost", "", Justification = "Coloured output required in this script")]
<#
.SYNOPSIS This script can be used to create the Azure Verified Modules (AVM) standard GitHub labels to a GitHub repository.
.DESCRIPTION This script can be used to create the Azure Verified Modules (AVM) standard GitHub labels to a GitHub repository.
By default, the script will remove all pre-existing labels and apply the AVM labels. However, this can be changed by using the -RemoveExistingLabels parameter and setting it to $false. The tool will also output the labels that exist in the repository before and after the script has run to a CSV file in the current directory, or a directory specified by the -OutputDirectory parameter.
The AVM labels to be created are documented here: TBC
.NOTES Please ensure you have specified the GitHub repository correctly. The script will prompt you to confirm the repository name before proceeding.
.COMPONENT You must have the GitHub CLI installed and be authenticated to a GitHub account with access to the repository you are applying the labels to before running this script.
.LINK TBC
.Parameter RepositoryName
The name of the GitHub repository to apply the labels to.
.Parameter RemoveExistingLabels
If set to $true, the default value, the script will remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels. If set to $false, the script will not remove any pre-existing labels.
.Parameter UpdateAndAddLabelsOnly
If set to $true, the default value, the script will only update and add labels to the repository specified in -RepositoryName. If set to $false, the script will remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels.
.Parameter OutputDirectory
The directory to output the pre-existing and post-existing labels to in a CSV file. The default value is the current directory.
.Parameter CreateCsvLabelExports
If set to $true, the default value, the script will output the pre-existing and post-existing labels to a CSV file in the current directory, or a directory specified by the -OutputDirectory parameter. If set to $false, the script will not output the pre-existing and post-existing labels to a CSV file.
.Parameter GitHubCliLimit
The maximum number of labels to return from the GitHub CLI. The default value is 999.
.Parameter LabelsToApplyCsvUri
The URI to the CSV file containing the labels to apply to the GitHub repository. The default value is https://raw.githubusercontent.com/jtracey93/label-source/main/avm-github-labels.csv.
.Parameter NoUserPrompts
If set to $true, the default value, the script will not prompt the user to confirm they want to remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels. If set to $false, the script will prompt the user to confirm they want to remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels.
This is useful for running the script in automation workflows
.EXAMPLE Create the AVM labels in the repository Org/MyGitHubRepo and remove all pre-existing labels.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo"
.EXAMPLE Create the AVM labels in the repository Org/MyGitHubRepo and do not remove any pre-existing labels, just overwrite any labels that have the same name.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -RemoveExistingLabels $false
.EXAMPLE Create the AVM labels in the repository Org/MyGitHubRepo and output the pre-existing and post-existing labels to the directory C:\GitHubLabels.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -OutputDirectory "C:\GitHubLabels"
.EXAMPLE Create the AVM labels in the repository Org/MyGitHubRepo and output the pre-existing and post-existing labels to the directory C:\GitHubLabels and do not remove any pre-existing labels, just overwrite any labels that have the same name.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -OutputDirectory "C:\GitHubLabels" -RemoveExistingLabels $false
.EXAMPLE Create the AVM labels in the repository Org/MyGitHubRepo and do not create the pre-existing and post-existing labels CSV files and do not remove any pre-existing labels, just overwrite any labels that have the same name.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -RemoveExistingLabels $false -CreateCsvLabelExports $false
.EXAMPLE Create the AVM labels in the repository Org/MyGitHubRepo and do not create the pre-existing and post-existing labels CSV files and do not remove any pre-existing labels, just overwrite any labels that have the same name. Finally, use a custom CSV file hosted on the internet to create the labels from.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -OutputDirectory "C:\GitHubLabels" -RemoveExistingLabels $false -CreateCsvLabelExports $false -LabelsToApplyCsvUri "https://example.com/csv/avm-github-labels.csv"
#>

#Requires -PSEdition Core

[CmdletBinding()]
param (
[Parameter(Mandatory = $true)]
[string]$RepositoryName,
[Parameter(Mandatory = $false)]
[bool]$RemoveExistingLabels = $true,
[Parameter(Mandatory = $false)]
[bool]$UpdateAndAddLabelsOnly = $true,
[Parameter(Mandatory = $false)]
[bool]$CreateCsvLabelExports = $true,
[Parameter(Mandatory = $false)]
[string]$OutputDirectory = (Get-Location),
[Parameter(Mandatory = $false)]
[int]$GitHubCliLimit = 999,
[Parameter(Mandatory = $false)]
[string]$LabelsToApplyCsvUri = "https://zojovano.github.io/azure-verified-modules-copy/governance/avm-standard-github-labels.csv",
[Parameter(Mandatory = $false)]
[bool]$NoUserPrompts = $false
)
# Check if the GitHub CLI is installed
$GitHubCliInstalled = Get-Command gh -ErrorAction SilentlyContinue
if ($null -eq $GitHubCliInstalled) {
  throw "The GitHub CLI is not installed. Please install the GitHub CLI and try again."
}
Write-Host "The GitHub CLI is installed..." -ForegroundColor Green
# Check if GitHub CLI is authenticated
$GitHubCliAuthenticated = gh auth status
if ($LASTEXITCODE -ne 0) {
  Write-Host $GitHubCliAuthenticated -ForegroundColor Red
  throw "Not authenticated to GitHub. Please authenticate to GitHub using the GitHub CLI, `gh auth login`, and try again."
}
Write-Host "Authenticated to GitHub..." -ForegroundColor Green
# Check if GitHub repository name is valid
$GitHubRepositoryNameValid = $RepositoryName -match "^[a-zA-Z0-9-]+/[a-zA-Z0-9-]+$"
if ($false -eq $GitHubRepositoryNameValid) {
  throw "The GitHub repository name $RepositoryName is not valid. Please check the repository name and try again. The format must be <OrgName>/<RepoName>"
}

# List GitHub repository provided and check it exists
$GitHubRepository = gh repo view $RepositoryName
if ($LASTEXITCODE -ne 0) {
  Write-Host $GitHubRepository -ForegroundColor Red
  throw "The GitHub repository $RepositoryName does not exist. Please check the repository name and try again."
}
Write-Host "The GitHub repository $RepositoryName exists..." -ForegroundColor Green
# PRE - Get the current GitHub repository labels and export to a CSV file in the current directory or where -OutputDirectory specifies if set to a valid directory path and the directory exists or can be created if it does not exist already
if ($RemoveExistingLabels -or $UpdateAndAddLabelsOnly) {
Write-Host "Getting the current GitHub repository (pre) labels for $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels = gh label list -R $RepositoryName -L $GitHubCliLimit --json name,description,color
if ($null -ne $GitHubRepositoryLabels -and $CreateCsvLabelExports -eq $true) {
$csvFileNamePathPre = "$OutputDirectory\$($RepositoryName.Replace('/', '_'))-Labels-Pre-$(Get-Date -Format FileDateTime).csv"
Write-Host "Exporting the current GitHub repository (pre) labels for $RepositoryName to $csvFileNamePathPre" -ForegroundColor Yellow
$GitHubRepositoryLabels | ConvertFrom-Json | Export-Csv -Path $csvFileNamePathPre -NoTypeInformation
}
}
# Remove all pre-existing labels if -RemoveExistingLabels is set to $true and user confirms they want to remove all pre-existing labels
if ($null -ne $GitHubRepositoryLabels) {
$GitHubRepositoryLabelsJson = $GitHubRepositoryLabels | ConvertFrom-Json
if ($RemoveExistingLabels -eq $true -and $NoUserPrompts -eq $false -and $UpdateAndAddLabelsOnly -eq $false) {
$RemoveExistingLabelsConfirmation = Read-Host "Are you sure you want to remove all $($GitHubRepositoryLabelsJson.Count) pre-existing labels from $($RepositoryName)? (Y/N)"
if ($RemoveExistingLabelsConfirmation -eq "Y") {
Write-Host "Removing all pre-existing labels from $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels | ConvertFrom-Json | ForEach-Object {
Write-Host "Removing label $($_.name) from $RepositoryName..." -ForegroundColor DarkRed
gh label delete -R $RepositoryName $_.name --yes
}
}
}
if ($RemoveExistingLabels -eq $true -and $NoUserPrompts -eq $true -and $UpdateAndAddLabelsOnly -eq $false) {
Write-Host "Removing all pre-existing labels from $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels | ConvertFrom-Json | ForEach-Object {
Write-Host "Removing label $($_.name) from $RepositoryName..." -ForegroundColor DarkRed
gh label delete -R $RepositoryName $_.name --yes
}
}
}
if ($null -eq $GitHubRepositoryLabels) {
Write-Host "No pre-existing labels to remove or not selected to be removed from $RepositoryName..." -ForegroundColor Magenta
}
# Check LabelsToApplyCsvUri is valid and contains CSV content
Write-Host "Checking $LabelsToApplyCsvUri is valid..." -ForegroundColor Yellow
$LabelsToApplyCsvUriValid = $LabelsToApplyCsvUri -match "^https?://"
if ($false -eq $LabelsToApplyCsvUriValid) {
  throw "The LabelsToApplyCsvUri $LabelsToApplyCsvUri is not valid. Please check the URI and try again. The format must be a valid URI."
}
Write-Host "The LabelsToApplyCsvUri $LabelsToApplyCsvUri is valid..." -ForegroundColor Green
# Create AVM labels from the AVM labels CSV file stored on the web using the ConvertFrom-Csv cmdlet
$avmLabelsCsv = Invoke-WebRequest -Uri $LabelsToApplyCsvUri | ConvertFrom-Csv

# Check if the AVM labels CSV file contains the following columns: Name, Description, HEX
$avmLabelsCsvColumns = $avmLabelsCsv | Get-Member -MemberType NoteProperty | Select-Object -ExpandProperty Name
$avmLabelsCsvColumnsValid = $avmLabelsCsvColumns -contains "Name" -and $avmLabelsCsvColumns -contains "Description" -and $avmLabelsCsvColumns -contains "HEX"
if ($false -eq $avmLabelsCsvColumnsValid) {
  throw "The labels CSV file does not contain the required columns: Name, Description, HEX. Please check the CSV file and try again. It contains the following columns: $avmLabelsCsvColumns"
}
Write-Host "The labels CSV file contains the required columns: Name, Description, HEX" -ForegroundColor Green
# Create the AVM labels in the GitHub repository
Write-Host "Creating/Updating the $($avmLabelsCsv.Count) AVM labels in $RepositoryName..." -ForegroundColor Yellow
$avmLabelsCsv | ForEach-Object {
if ($GitHubRepositoryLabelsJson.name -contains $_.name) {
Write-Host "The label $($_.name) already exists in $RepositoryName. Updating the label to ensure description and color are consitent..." -ForegroundColor Magenta
gh label create -R $RepositoryName "$($_.name)" -c $_.HEX -d $($_.Description) --force
}
else {
Write-Host "The label $($_.name) does not exist in $RepositoryName. Creating label $($_.name) in $RepositoryName..." -ForegroundColor Cyan
gh label create -R $RepositoryName "$($_.Name)" -c $_.HEX -d $($_.Description) --force
}
}
# POST - Get the current GitHub repository labels and export to a CSV file in the current directory or where -OutputDirectory specifies if set to a valid directory path and the directory exists or can be created if it does not exist already
if ($CreateCsvLabelExports -eq $true) {
Write-Host "Getting the current GitHub repository (post) labels for $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels = gh label list -R $RepositoryName -L $GitHubCliLimit --json name,description,color
if ($null -ne $GitHubRepositoryLabels) {
$csvFileNamePathPre = "$OutputDirectory\$($RepositoryName.Replace('/', '_'))-Labels-Post-$(Get-Date -Format FileDateTime).csv"
Write-Host "Exporting the current GitHub repository (post) labels for $RepositoryName to $csvFileNamePathPre" -ForegroundColor Yellow
$GitHubRepositoryLabels | ConvertFrom-Json | Export-Csv -Path $csvFileNamePathPre -NoTypeInformation
}
}
# If -RemoveExistingLabels is set to $true and user confirms they want to remove all pre-existing labels, check that only the AVM labels exist in the repository
if ($RemoveExistingLabels -eq $true -and ($RemoveExistingLabelsConfirmation -eq "Y" -or $NoUserPrompts -eq $true) -and $UpdateAndAddLabelsOnly -eq $false) {
Write-Host "Checking that only the AVM labels exist in $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels = gh label list -R $RepositoryName -L $GitHubCliLimit --json name,description,color
$GitHubRepositoryLabels | ConvertFrom-Json | ForEach-Object {
if ($avmLabelsCsv.Name -notcontains $_.name) {
throw"The label $($_.name) exists in $RepositoryName but is not in the CSV file." }
}
Write-Host "Only the CSV labels exist in $RepositoryName..." -ForegroundColor Green
}
Write-Host "The CSV labels have been created/updated in $RepositoryName..." -ForegroundColor Green
Module owners MUST set a branch protection policy on their GitHub repositories for AVM modules against their default branch, typically main, to do the following:
Require a Pull Request before merging
Require approval of the most recent reviewable push
Dismiss stale pull request approvals when new commits are pushed
Require linear history
Prevent force pushes
Do not allow deletions
Require CODEOWNERS review
Do not allow bypassing the above settings
The above settings MUST also be enforced for administrators
Tip
If you use the template repository as mentioned in the contribution guide, the above will automatically be set.
Telemetry
We will maintain a set of CSV files in the AVM Central Repo (Azure/Azure-Verified-Modules) with the required TelemetryId prefixes to enable checks to utilize this list to ensure the correct IDs are used. To see the formatted content of these CSV files with additional information, please visit the AVM Module Indexes page.
These will also be provided as a comment on the module proposal, once accepted, from the AVM core team.
Modules MUST provide the capability to collect deployment/usage telemetry, as detailed further in Telemetry.
To highlight that AVM modules use telemetry, an information notice MUST be included in the footer of each module’s README.md file with the below content. (See more details on this requirement, here.)
Telemetry Information Notice
Note
The following information notice is automatically added at the bottom of the README.md file of the module when
Terraform: Executing the make docs command with the note and header ## Data Collection being placed in the module’s _footer.md beforehand
### Data Collection
The software may collect information about you and your use of the software and send it to Microsoft. Microsoft may use this information to provide services and improve our products and services. You may turn off the telemetry as described in the [repository](https://aka.ms/avm/telemetry). There are also some features in the software that may enable you and Microsoft to collect data from users of your applications. If you use these features, you must comply with applicable law, including providing appropriate notices to users of your applications together with a copy of Microsoft’s privacy statement. Our privacy statement is located at <https://go.microsoft.com/fwlink/?LinkID=824704>. You can learn more about data collection and use in the help documentation and our privacy statement. Your use of the software operates as your consent to these practices.
The ARM deployment name used for the telemetry MUST follow the pattern and MUST be no longer than 64 characters in length: 46d3xbcp.<res/ptn>.<(short) module name>.<version>.<uniqueness>
<res/ptn> == AVM Resource or Pattern Module
<(short) module name> == The AVM Module’s, possibly shortened, name including the resource provider and the resource type, without:
The prefix: avm-res-
The prefix: avm-ptn-
<version> == The AVM Module’s MAJOR.MINOR version (only) with . (periods) replaced with - (hyphens), to allow simpler splitting of the ARM deployment name
<uniqueness> == This section of the ARM deployment name is to be used to ensure uniqueness of the deployment name.
This is to cater for the following scenarios:
The module is deployed multiple times to the same:
Due to the 64-character length limit of Azure deployment names, the <(short) module name> segment has a length limit of 36 characters, so if the module name is longer than that, it MUST be truncated to 36 characters. If any of the semantic version’s segments are longer than 1 character, it further restricts the number of characters that can be used for naming the module.
An example deployment name for the AVM Virtual Machine Resource Module would be: 46d3xbcp.res.compute-virtualmachine.1-2-3.eum3
An example deployment name for a shortened module name would be: 46d3xbcp.res.desktopvirtualization-appgroup.1-2-3.eum3
Tip
Terraform: Terraform uses a telemetry provider, the configuration of which is the same for every module and is included in the template repo.
General: See the language specific contribution guides for detailed guidance and sample code to use in AVM modules to achieve this requirement.
To enable telemetry data collection for Terraform modules, the modtm telemetry provider MUST be used. This lightweight telemetry provider sends telemetry data to Azure Application Insights via an HTTP POST front-end service.
The modtm telemetry provider is included in all Terraform modules and is enabled by default through the main.telemetry.tf file being automatically distributed from the template repo.
The modtm provider MUST be listed under the required_providers section in the module’s terraform.tf file using the following entry. This is also validated by the linter.
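A sketch of such an entry is shown below; the Azure/modtm source is the published provider address, while the version constraint is an assumption - check the template repository and linter for the exact range they enforce.

```terraform
terraform {
  required_providers {
    # Telemetry provider used by AVM Terraform modules.
    modtm = {
      source  = "Azure/modtm"
      version = "~> 0.3" # assumption - the linter/template repo defines the enforced constraint
    }
  }
}
```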
The telemetry enablement MUST be on/enabled by default, however this MUST be able to be disabled by a module consumer by setting the below parameter/variable value to false:
Bicep: enableTelemetry
Terraform: enable_telemetry
Note
Whenever a module references AVM modules that implement the telemetry parameter (e.g., a pattern module that uses AVM resource modules), the telemetry parameter value MUST be passed through to these modules. This is necessary to ensure a consumer can reliably enable & disable the telemetry feature for all used modules.
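A Terraform sketch of the variable and of passing it through to a consumed AVM module; the description text and the referenced module source are illustrative, not the exact wording required by the template repository.

```terraform
variable "enable_telemetry" {
  type        = bool
  default     = true # telemetry is enabled by default, as required
  description = "Controls whether telemetry is enabled for this module and the AVM modules it consumes. See https://aka.ms/avm/telemetry."
}

# A pattern module passing the value through to a consumed AVM resource module (illustrative source):
module "key_vault" {
  source = "Azure/avm-res-keyvault-vault/azurerm" # placeholder module source

  # ... other required inputs ...

  enable_telemetry = var.enable_telemetry
}
```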
This general specification can be modified for some use-cases, that are language specific:
Bicep
For cross-references in resource modules, the spec BCPFR7 also applies.
Terraform
Currently, no further requirements apply.
Naming / Composition
Modules MAY create/adopt public preview services and features at their discretion.
Preview API versions MAY be used when:
The resource/service/feature is GA but the only API version available for the GA resource/service/feature is a preview version
For example, for Diagnostic Settings (Microsoft.Insights/diagnosticSettings), the latest API version available with GA features, like Category Groups etc., is 2021-05-01-preview
Otherwise the latest “non-preview” version of the API SHOULD be used
Preview services and features SHOULD NOT be promoted and exposed, unless they are supported by the respective PG and documented publicly.
However, they MAY be exposed at the module owner’s discretion, but the following rules MUST be followed:
The description of each of the parameters/variables used for the preview service/feature MUST start with:
“THIS IS A <PARAMETER/VARIABLE> USED FOR A PREVIEW SERVICE/FEATURE, MICROSOFT MAY NOT PROVIDE SUPPORT FOR THIS, PLEASE CHECK THE PRODUCT DOCS FOR CLARIFICATION”
Modules SHOULD set defaults in input parameters/variables to align to high priority/impact/severity recommendations, where appropriate and applicable, in the following frameworks and resources:
They SHOULD NOT align to these recommendations when it requires an external dependency/resource to be deployed and configured and then associated to the resources in the module.
Alignment SHOULD prioritize best-practices and security over cost optimization, but MUST allow for these to be overridden by a module consumer easily, if desired.
ID: SFR5 - Category: Composition - Availability Zones
Modules that deploy zone-redundant resources MUST enable the spanning across as many zones as possible by default, typically all 3.
Modules that deploy zonal resources MUST provide the ability to specify a zone for the resources to be deployed/pinned to. However, they MUST NOT default to a particular zone, e.g. 1, in an effort to make the consumer aware of the zone they are selecting to suit their architecture requirements.
For both scenarios the modules MUST expose these configuration options via configurable parameters/variables.
ID: SFR6 - Category: Composition - Data Redundancy
Modules that deploy resources or patterns that support data redundancy SHOULD enable this to the highest possible value by default, e.g. RA-GZRS. When a resource or pattern doesn’t provide the ability to specify data redundancy as a simple property, e.g. GRS etc., then the modules MUST provide the ability to enable data redundancy for the resources or pattern via parameters/variables.
For example, a Storage Account module can simply set the sku.name property to Standard_RAGZRS. Whereas a SQL DB or Cosmos DB module will need to expose more properties, via parameters/variables, to allow the specification of the regions to replicate data to as per the consumer's requirements.
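As a hedged Terraform sketch of the Storage Account case (the resource name and values below are illustrative, not part of the original text):

```terraform
variable "account_replication_type" {
  type        = string
  default     = "RAGZRS" # highest redundancy by default; consumers can override this
  description = "The replication type of the storage account."
}

resource "azurerm_storage_account" "this" {
  name                     = "stavmexample"   # illustrative value
  resource_group_name      = "rg-avm-example" # illustrative value
  location                 = "westeurope"     # illustrative value
  account_tier             = "Standard"
  account_replication_type = var.account_replication_type
}
```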
Module owners MUST set the default resource name prefix for child, extension, and interface resources to the associated abbreviation for the specific resource as documented in the following CAF article Abbreviation examples for Azure resources, if specified and documented. This reduces the amount of input values a module consumer MUST provide by default when using the module.
For example, a Private Endpoint that is being deployed as part of a resource module, via the mandatory interfaces, MUST set the Private Endpoint’s default name to begin with the prefix of pep-.
Module owners MUST also provide the ability for these default names, including the prefixes, to be overridden via a parameter/variable if the consumer wishes to.
Furthermore, as per RMNFR2, Resource Modules MUST NOT have a default value specified for the name of the primary resource and therefore the name MUST be provided and specified by the module consumer.
The name provided MAY be used by the module owner to generate the rest of the default name for child, extension, and interface resources if they wish to. For example, for the Private Endpoint mentioned above, the full default name that can be overridden by the consumer, MAY be pep-<primary-resource-name>.
Tip
If the resource does not have a documented abbreviation in Abbreviation examples for Azure resources, then the module owner is free to use a sensible prefix instead.
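A small Terraform sketch of this default-naming pattern, assuming illustrative variable names (the spec does not mandate these exact names):

```terraform
variable "name" {
  type        = string
  description = "The name of the primary resource. No default, as per RMNFR2."
}

variable "private_endpoint_name" {
  type        = string
  default     = null
  description = "Override for the Private Endpoint name. Defaults to `pep-<primary-resource-name>` when not set."
}

locals {
  # Use the consumer-provided name if set; otherwise derive it with the CAF `pep-` prefix.
  private_endpoint_name = coalesce(var.private_endpoint_name, "pep-${var.name}")
}
```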
Example: avm/ptn/compute/app-tier-vmss or avm/ptn/avd-lza/management-plane or avm/ptn/3-tier/web-app
Segments:
ptn defines this as a pattern module
<hyphenated grouping/category name> is a hierarchical grouping of pattern modules by category, with each word separated by dashes, such as:
project name, e.g., avd-lza,
primary resource provider, e.g., compute or network, or
architecture, e.g., 3-tier
<hyphenated pattern module name> is a term describing the module’s function, with each word separated by dashes, e.g., app-tier-vmss = Application Tier VMSS; management-plane = Azure Virtual Desktop Landing Zone Accelerator Management Plane
Terraform Pattern Module Naming
Naming convention:
avm-ptn-<pattern module name> (Module name for registry)
terraform-<provider>-avm-ptn-<pattern module name> (GitHub repository name to meet registry naming requirements)
Example: avm-ptn-apptiervmss or avm-ptn-avd-lza-managementplane
Segments:
<provider> is the logical abstraction of various APIs used by Terraform. In most cases, this is going to be azurerm or azuread for resource modules.
ptn defines this as a pattern module
<pattern module name> is a term describing the module’s function, e.g., apptiervmss = Application Tier VMSS; avd-lza-managementplane = Azure Virtual Desktop Landing Zone Accelerator Management Plane
ID: PMNFR2 - Category: Composition - Use Resource Modules to Build a Pattern Module
A Pattern Module SHOULD be built from AVM Resources Modules to establish a standardized code base and improve maintainability. If a valid reason exists, a pattern module MAY contain native resources (“vanilla” code) where it’s necessary. A Pattern Module MUST NOT contain references to non-AVM modules.
Valid reasons for not using a Resource Module for a resource required by a Pattern Module include but are not limited to:
When using a Resource Module would result in hitting scaling limitations and/or would reduce the capabilities of the Pattern Module due to the limitations of Azure Resource Manager.
Developing a Pattern Module under time constraint, without having all required Resource Modules readily available.
Note
In the latter case, the Pattern Module SHOULD be updated to use the Resource Module when the required Resource Module becomes available, to avoid accumulating technical debt. Ideally, all required Resource Modules SHOULD be developed first, and then leveraged by the Pattern Module.
Module owners MAY cross-reference other modules to build either Resource or Pattern modules. However, they MUST be referenced only by a HashiCorp Terraform registry reference to a pinned version, e.g.,
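The example reference itself did not survive extraction; a hypothetical illustration of a pinned registry reference (module name and version are placeholders):

```terraform
module "virtual_network" {
  source  = "Azure/avm-res-network-virtualnetwork/azurerm" # registry reference, never a local path
  version = "0.2.3"                                         # pinned version (placeholder)

  # ...module inputs omitted for brevity...
}
```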
Module owners MUST use map(xxx) or set(xxx) as a resource's for_each collection, and the map's keys or set's elements MUST be static literals.
Good example:
resource"azurerm_subnet""pair" {
for_each = var.subnet_map // `map(string)`, when user call this module, it could be: `{ "subnet0": "subnet0" }`, or `{ "subnet0": azurerm_subnet.subnet0.name }`
name = "${each.value}"-pairresource_group_name = azurerm_resource_group.example.namevirtual_network_name = azurerm_virtual_network.example.nameaddress_prefixes = ["10.0.1.0/24"]
}
Bad example:
resource"azurerm_subnet""pair" {
for_each = var.subnet_name_set // `set(string)`, when user use `toset([azurerm_subnet.subnet0.name])`, it would cause an error.
name = "${each.value}"-pairresource_group_name = azurerm_resource_group.example.namevirtual_network_name = azurerm_virtual_network.example.nameaddress_prefixes = ["10.0.1.0/24"]
}
There are 3 types of assignment statements in a resource or data block: argument, meta-argument and nested block. The argument assignment statement is a parameter followed by =.
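A small illustration of the three, using an assumed azurerm resource (values are placeholders):

```terraform
resource "azurerm_resource_group" "example" {
  count = 1 # meta-argument

  name     = "rg-example"  # argument: a parameter name followed by `=`
  location = "westeurope"  # argument

  timeouts {               # nested block: a block name followed by `{ ... }`
    create = "30m"
  }
}
```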
Sometimes we need to ensure that the resources created are compliant to some rules at a minimum extent, for example a subnet has to be connected to at least one network_security_group. The user may pass in a security_group_id and ask us to make a connection to an existing security group, or want us to create a new security group.
The disadvantage of this approach is that if the user creates a security group directly in the root module and uses its id as a variable of the module, the expression that determines the value of count will contain an attribute from another resource. The value of this attribute is “known after apply” at plan stage, so Terraform core will not be able to produce an exact deployment plan during the “plan” stage.
For this kind of parameters, wrapping with object type is RECOMMENDED:
variable"security_group" {
type:object({
id = string })
default = null}
The advantage of doing so is encapsulating the value which is “known after apply” in an object; whether the object itself is null or not can easily be determined at plan time. Since the id of a resource cannot be null, this approach can avoid the situation we are facing in the first example, like the following:
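A sketch of what that can look like, assuming an azurerm NSG association; the subnet reference is illustrative:

```terraform
resource "azurerm_subnet_network_security_group_association" "this" {
  # Whether `var.security_group` is null is known at plan time, even when its `id` is "known after apply".
  count = var.security_group == null ? 0 : 1

  subnet_id                 = azurerm_subnet.this.id # illustrative reference
  network_security_group_id = var.security_group.id
}
```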
Variables used as feature switches SHOULD apply a positive statement: use xxx_enabled instead of xxx_disabled as the name of a variable, and avoid double negatives like !xxx_disabled.
ID: TFNFR17 - Category: Code Style - Variables with Descriptions
The target audience of the description is the module's users.
For a newly created variable (e.g., a variable for switching a dynamic block on and off), its description SHOULD precisely describe the input parameter's purpose and the expected data type. The description SHOULD NOT contain any information intended for module developers; this kind of information can only exist in code comments.
For an object type variable, the description can be composed in HEREDOC format:
variable"kubernetes_cluster_key_management_service" {
type:object({
key_vault_key_id = stringkey_vault_network_access = optional(string)
})
default = nulldescription = <<-EOT - `key_vault_key_id` - (Required) Identifier of Azure Key Vault key. See [key identifier format](https://learn.microsoft.com/en-us/azure/key-vault/general/about-keys-secrets-certificates#vault-name-and-object-name) for more details. When Azure Key Vault key management service is enabled, this field is required and must be a valid key identifier. When `enabled` is `false`, leave the field empty.
- `key_vault_network_access` - (Optional) Network access of the key vault Network access of key vault. The possible values are `Public` and `Private`. `Public` means the key vault allows public access from all networks. `Private` means the key vault disables public access and enables private link. Defaults to `Public`.
EOT}
ID: TFNFR19 - Category: Code Style - Sensitive Data Variables
If a variable's type is object and it contains one or more fields that would be assigned to a sensitive argument, then the whole variable SHOULD be declared as sensitive = true; otherwise you SHOULD extract the sensitive field into a separate variable block with sensitive = true.
Nullable SHOULD be set to false for collection values (e.g. sets, maps, lists) when using them in loops. However for scalar values like string and number, a null value MAY have a semantic meaning and as such these values are allowed.
Sometimes we will find that the names of some variables are no longer suitable, or that a change SHOULD be made to the data type. We want to ensure forward compatibility within a major version, so direct changes are strictly forbidden. The right way to do this is to move the variable to an independent deprecated_variables.tf file, then redefine the new parameter in variable.tf and make sure it's compatible everywhere else.
A deprecated variable MUST be annotated as DEPRECATED at the beginning of its description, and the replacement's name SHOULD be declared at the same time. E.g.,
variable"enable_network_security_group" {
type = stringdefault = nulldescription = "DEPRECATED, use `network_security_group_enabled` instead; Whether to generate a network security group and assign it to the subnet. Changing this forces a new resource to be created."}
A cleanup of deprecated_variables.tf SHOULD be performed during a major version release.
The terraform.tf file MUST only contain one terraform block.
The first line of the terraform block MUST define a required_version property for the Terraform CLI.
The required_version property MUST include a constraint on the minimum version of the Terraform CLI. Previous releases of the Terraform CLI can have unexpected behavior.
The required_version property MUST include a constraint on the maximum major version of the Terraform CLI. Major version releases of the Terraform CLI can introduce breaking changes and MUST be tested.
The required_version property constraint SHOULD use the ~> #.# or the >= #.#.#, < #.#.# format.
Note: You can read more about Terraform version constraints in the documentation.
ID: TFNFR26 - Category: Code Style - Providers in required_providers
The terraform block in terraform.tf MUST contain the required_providers block.
Each provider used directly in the module MUST be specified with the source and version properties. Providers in the required_providers block SHOULD be sorted in alphabetical order.
Do not add providers to the required_providers block that are not directly required by this module. If submodules are used then each submodule SHOULD have its own versions.tf file.
The source property MUST be in the format of namespace/name. If this is not explicitly specified, it can cause failure.
The version property MUST include a constraint on the minimum version of the provider. Older provider versions may not work as expected.
The version property MUST include a constraint on the maximum major version. A provider major version release may introduce breaking changes, so updates to the major version constraint for a provider MUST be tested.
The version property constraint SHOULD use the ~> #.# or the >= #.#.#, < #.#.# format.
Note: You can read more about Terraform version constraints in the documentation.
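A hedged sketch of a terraform.tf that satisfies both the required_version (see TFNFR25 above) and required_providers requirements; the version numbers are placeholders, not prescribed values:

```terraform
terraform {
  required_version = "~> 1.9" # placeholder constraint in the recommended format

  required_providers {
    # Providers sorted alphabetically, each with source and a min/max-major constraint.
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.100.0, < 5.0.0" # placeholder constraint
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.5" # placeholder constraint
    }
  }
}
```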
As a rule, a provider MUST NOT be declared in the module code. The only exception is when the module indeed needs different instances of the same kind of provider (e.g., manipulating resources across different locations or accounts); in that case, you MUST declare configuration_aliases in terraform.required_providers. See details in this document.
A provider block declared in the module MUST only be used to differentiate instances used in resource and data blocks. Declaring fields other than alias in a provider block is strictly forbidden, as it could leave module users unable to utilize count, for_each or depends_on. Configurations of the provider instance SHOULD be passed in by the module users.
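A sketch of the permitted exception, declaring an alias via configuration_aliases (the version constraint is a placeholder):

```terraform
terraform {
  required_providers {
    azurerm = {
      source                = "hashicorp/azurerm"
      version               = ">= 3.100.0, < 5.0.0" # placeholder constraint
      configuration_aliases = [azurerm.secondary]   # the module never configures this instance itself
    }
  }
}
```

The calling root module then configures both instances and passes them in through the providers meta-argument, e.g. providers = { azurerm = azurerm, azurerm.secondary = azurerm.secondary }.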
Sometimes we notice that the name of a certain output is no longer appropriate; however, since we have to ensure forward compatibility within the same major version, its name MUST NOT be changed directly. It MUST be moved to an independent deprecated_outputs.tf file, then a new output MUST be defined in output.tf and made compatible everywhere else in the module.
A cleanup SHOULD be performed on deprecated_outputs.tf and other compatibility-related logic during a major version upgrade.
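A hedged illustration of the pattern; the output names and the referenced resource are hypothetical:

```terraform
# deprecated_outputs.tf
output "resource_group_id" {
  description = "DEPRECATED, use `resource_id` instead; the ID of the resource group."
  value       = azurerm_resource_group.this.id # hypothetical resource reference
}

# output.tf
output "resource_id" {
  description = "The ID of the resource group."
  value       = azurerm_resource_group.this.id # hypothetical resource reference
}
```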
ID: TFNFR31 - Category: Code Style - locals.tf for Locals Only
In the locals.tf file we can declare multiple locals blocks, but only locals blocks are allowed.
You MAY declare locals blocks next to a resource block or data block for some advanced scenarios, like making a fake module to execute some light-weight tests aimed at the expressions.
From Terraform AzureRM provider 3.0, the default value of prevent_deletion_if_contains_resources in the provider block is true. This can lead to unstable tests, because the test subscription has some policies applied that add extra resources during the run, which can cause failures during the destroy of resource groups.
Since we cannot guarantee that Azure Policy remediation tasks won't be applied to our testing environment in the future, prevent_deletion_if_contains_resources SHOULD be explicitly set to false for a robust testing environment.
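A sketch of the provider configuration used in test/example code to satisfy this:

```terraform
provider "azurerm" {
  features {
    resource_group {
      # Allow test resource groups to be destroyed even if policy remediation added extra resources.
      prevent_deletion_if_contains_resources = false
    }
  }
}
```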
newres is a command-line tool that generates Terraform configuration files for a specified resource type. It automates the process of creating variables.tf and main.tf files, making it easier to get started with Terraform and reducing the time spent on manual configuration.
Module owners MAY use newres when they're trying to add a new resource block, attribute, or nested block. They MAY generate the whole block along with the corresponding variable blocks in an empty folder, then copy-paste the parts they need with essential refactoring.
Inputs / Outputs
ID: SNFR22 - Category: Inputs - Parameters/Variables for Resource IDs
A module parameter/variable that requires a full Azure Resource ID as an input value, e.g. /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.KeyVault/vaults/{keyVaultName}, MUST contain ResourceId/resource_id in its parameter/variable name to assist users in knowing what value to provide at a glance of the parameter/variable name.
Example for the property workspaceId for the Diagnostic Settings resource. In Bicep its parameter name should be workspaceResourceId and the variable name in Terraform should be workspace_resource_id.
workspaceId is not descriptive enough and is ambiguous as to which ID is required to be input.
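A hedged Terraform sketch for this example (the description wording is an assumption, not prescribed text):

```terraform
variable "workspace_resource_id" {
  type        = string
  default     = null
  description = "The full Azure Resource ID of the Log Analytics Workspace to send diagnostic data to, e.g. `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}`."
}
```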
Authors SHOULD NOT output entire resource objects, as these may contain sensitive outputs and the schema can change with API or provider versions. Instead, authors SHOULD output the computed attributes of the resource as discrete outputs. This kind of pattern protects against provider schema changes and is known as an anti-corruption layer.
Remember, you SHOULD NOT output values that are already inputs (other than name).
E.g.,
# Resource output, computed attribute.
output"foo" {
description = "MyResource foo attribute"value = azurerm_resource_myresource.foo}# Resource output for resources that are deployed using `for_each`. Again only computed attributes.
output"childresource_foos" {
description = "MyResource children's foo attributes"value = {
forkey, valueinazurerm_resource_mychildresource:key => value.foo }
}# Output of a sensitive attribute
output"bar" {
description = "MyResource bar attribute"value = azurerm_resource_myresource.barsensitive = true}
ID: TFNFR14 - Category: Inputs - Not allowed variables
Since Terraform 0.13, count, for_each and depends_on have been available for modules, which significantly simplifies module development. Module owners MUST NOT add variables like enabled or module_depends_on to control the entire module's operation. Boolean feature toggles are acceptable, however.
Testing
Modules MUST implement end-to-end (deployment) testing that creates actual resources to validate that module deployments work. In Bicep, tests are sourced from the directories in /tests/e2e. In Terraform, these are in /examples.
Each test MUST run and complete successfully without user inputs, for automation purposes.
Each test MUST also destroy/clean-up its resources and test dependencies following a run.
Tip
To see a directory and file structure for a module, see the language specific contribution guide.
It is likely that to complete E2E tests, a number of resources will be required as dependencies to enable the tests to pass successfully. Some examples:
When testing the Diagnostic Settings interface for a Resource Module, you will need an existing Log Analytics Workspace to be able to send the logs to as a destination.
When testing the Private Endpoints interface for a Resource Module, you will need an existing Virtual Network, Subnet and Private DNS Zone to be able to complete the Private Endpoint deployment and configuration.
Module owners MUST:
Create the required resources that their module depends upon in the test file/directory
They MUST either use:
Simple/native resource declarations/definitions in their respective IaC language, OR
Another already published AVM Module that MUST be pinned to a specific published version.
They MUST NOT use any local directory path references or local copies of AVM modules in their own modules test directory.
➕ Terraform & Bicep Log Analytics Workspace examples using simple/native declarations for use in E2E tests
Deployment tests are an important part of a module’s validation and a staple of AVM’s CI environment. However, there are situations where certain e2e-test-deployments cannot be performed against AVM’s test environment (e.g., if a special configuration/registration (such as certain AI models) is required). For these cases, the CI offers the possibility to ‘skip’ specific test cases by placing a file named .e2eignore in their test folder.
Note
A skipped test case is still added to the ‘Usage Examples’ section of the module's readme and should be manually validated at regular intervals.
Details for use in E2E tests
You MUST add a note to the test's metadata description, which explains the exemption.
If you require that a test is skipped and add an “.e2eignore” file (e.g. \<module\>/tests/e2e/\<testname\>/.e2eignore) to a pull request, a member of the AVM Core Technical Bicep Team must approve said pull request. The content of the file is logged in the module's workflow runs and transparently communicates why the test case is skipped during the deployment validation stage. It is hence important to specify the reason for skipping the deployment in this file.
Sample file content:
The test is skipped, as only one instance of this service can be deployed to a subscription.
Note
For resource modules, the ‘defaults’ and ‘waf-aligned’ tests can’t be skipped.
The deployment of a test can be skipped by adding a .e2eignore file into a test folder (e.g. /examples/<testname>).
Modules SHOULD implement unit testing to ensure logic and conditions within parameters/variables/locals are performing correctly. These tests MUST pass before a module version can be published.
Unit tests validate specific module functionality without deploying resources. They are used on more complex modules. In Terraform, these live in tests/unit.
Modules MUST use static analysis, e.g., linting, security scanning (PSRule, tflint, etc.). These tests MUST pass before a module version can be published.
There may be differences between languages in linting rules standards, but the AVM core team will try to close these and bring them into alignment over time.
Modules MUST implement idempotency end-to-end (deployment) testing. E.g. deploying the module twice over the top of itself.
Modules SHOULD pass the idempotency test, as we are aware that there are some exceptions where they may fail as a false-positive or legitimate cases where a resource cannot be idempotent.
For example, Virtual Machine Image names must be unique on each resource creation/update.
Module owners MUST test that child and extension resources and those Bicep or Terraform interface resources that are supported by their modules are validated in E2E tests as per SNFR2 to ensure they deploy and are configured correctly.
These MAY be tested in a separate E2E test and DO NOT have to be tested in each E2E test.
README documentation MUST be automatically/programmatically generated. It MUST include the sections as defined in the language specific requirements BCPNFR2, TFNFR2.
Where descriptions for variables and outputs span multiple lines, the description MAY provide variable input examples for each variable using the HEREDOC format and embedded markdown.
Example:
variable"my_complex_input" {
type = map(object({
param1 = stringparam2 = optional(number, null)
}))
description = <<DESCRIPTION A complex input variable that is a map of objects.
Each object has two attributes:
- `param1`: A required string parameter.
- `param2`: (Optional) An optional number parameter.
Example Input:
```terraform
my_complex_input = {
"object1" = {
param1 = "value1"
param2 = 2
}
"object2" = {
param1 = "value2"
}
}
```
DESCRIPTION }
You cannot specify the patch version for Bicep modules in the public Bicep Registry, as this is automatically incremented by 1 each time a module is published. You can only set the Major and Minor versions.
Modules MUST use semantic versioning (aka semver) for their versions and releases in accordance with: Semantic Versioning 2.0.0
For example all modules should be released using a semantic version that matches this pattern: X.Y.Z
X == Major Version
Y == Minor Version
Z == Patch Version
Module versioning before first Major version release 1.0.0
Initially, modules MUST be released as version 0.1.0 and incremented via Minor and Patch versions only, until the AVM Core Team is confident that the AVM specifications are mature enough and appropriate CI test coverage is in place, and the module owner is happy that the module has been “road tested” and is now stable enough for its first Major release of version 1.0.0.
Note
Releasing as version 0.1.0 initially and only incrementing Minor and Patch versions allows the module owner to make breaking changes more easily and frequently as it’s still not an official Major/Stable release. 👍
Until first Major version 1.0.0 is released, given a version number X.Y.Z:
X Major version MUST NOT be bumped.
Y Minor version MUST be bumped when introducing breaking changes (which would normally bump Major after 1.0.0 release) or feature updates (same as it will be after 1.0.0 release).
Z Patch version MUST be bumped when introducing non-breaking, backward compatible bug fixes (same as it will be after 1.0.0 release).
A module SHOULD avoid breaking changes, e.g., deprecating inputs vs. removing. If you need to implement changes that cause a breaking change, the major version should be increased.
Info
Modules that have not been released as 1.0.0 may introduce breaking changes, as explained in the previous ID SNFR17. That means that you have to introduce non-breaking and breaking changes with a minor version jump, as long as the module has not reached version 1.0.0.
There are, however, scenarios where you want to include breaking changes into a commit and not create a new major version. If you want to introduce breaking changes as part of a minor update, you can do so. In this case, it is essential to keep the change backward compatible, so that the existing code will continue to work. At a later point, another update can increase the major version and remove the code introduced for the backward compatibility.
Tip
See the language specific examples to find out how you can deal with deprecations in AVM modules.
ID: SNFR21 - Category: Publishing - Cross Language Collaboration
When the module owners of the same Resource or Pattern AVM module are not the same individual or team for all languages, each language's team SHOULD collaborate with their sibling language team for the same module to ensure consistency where possible.
Terraform Resource Module Specifications
Contribution / Support
A module MUST have an owner that is defined and managed by a GitHub Team in the Azure GitHub organization.
Today this is only Microsoft FTEs, but everyone is welcome to contribute. The module just MUST be owned by a Microsoft FTE (today) so we can enforce and provide the long-term support required by this initiative.
Note
The names for the GitHub teams for each approved module are already defined in the respective Module Indexes. These teams MUST be created (and used) for each module.
ID: SNFR20 - Category: Contribution/Support - GitHub Teams Only
All GitHub repositories that AVM modules are published from and hosted within MUST only assign GitHub repository permissions to GitHub teams.
Each module MUST have separate GitHub teams assigned for module owners AND module contributors respectively. These GitHub teams MUST be created in the Azure organization in GitHub.
There MUST NOT be any GitHub repository permissions assigned to individual users.
Note
The names for the GitHub teams for each approved module are already defined in the respective Module Indexes. These teams MUST be created (and used) for each module.
The @Azure prefix in the last column of the tables linked above represents the “Azure” GitHub organization all AVM-related repositories exist in. DO NOT include this segment in the team’s name!
Important
Non-FTE / external contributors (subject matter experts that aren’t Microsoft employees) can’t be members of the teams described in this chapter, hence, they won’t gain any extra permissions on AVM repositories, therefore, they need to work in forks.
Naming Convention
The naming convention for the GitHub teams MUST follow the below pattern:
<hyphenated module name>-module-owners-<bicep/tf> - to be assigned as the GitHub repository’s Module Owners team
<hyphenated module name>-module-contributors-<bicep/tf> - to be assigned as the GitHub repository’s Module Contributors team
Note
The naming convention for terraform modules is slightly different than the naming convention for their respective GitHub teams.
Segments:
<hyphenated module name> == the AVM Module’s name, with each segment separated by dashes, i.e., avm-res-<resource provider>-<ARM resource type>
All officially documented module owner(s) MUST be added to the -module-owners- team. The -module-owners- team MUST NOT have any other members.
Any additional module contributors whom the module owner(s) agreed to work with MUST be added to the -module-contributors- team.
Unless explicitly requested and agreed, members of the AVM core team or any PG teams MUST NOT be added to the -module-owners- or -module-contributors- teams as permissions for them are granted through the teams described in SNFR9.
Grant Permissions - Bicep
Team memberships
Note
In case of Bicep modules, permissions to the BRM repository (the repo of the Bicep Registry) are granted via assigning the -module-owners- and -module-contributors- teams to parent teams that already have the required level of access configured. While it is the module owner's responsibility to initiate the addition of their teams to the respective parents, only the AVM core team can approve this parent-child relationship.
Module owners MUST create their -module-owners- and -module-contributors- teams and as part of the provisioning process, they MUST request the addition of these teams to their respective parent teams (see the table below for details).
| GitHub Team Name | Description | Permissions | Permissions granted through | Where to work? |
| --- | --- | --- | --- | --- |
| `<hyphenated module name>-module-owners-bicep` | AVM Bicep Module Owners - `<module name>` | Write | Assignment to the avm-technical-reviewers-bicep parent team. | |
Examples - GitHub teams required for the Bicep resource module of Azure Virtual Network (avm/res/network/virtual-network):
avm-res-network-virtualnetwork-module-owners-bicep –> assign to the avm-technical-reviewers-bicep parent team.
avm-res-network-virtualnetwork-module-contributors-bicep –> assign to the avm-module-contributors-bicep parent team.
Tip
Direct link to create a new GitHub team and assign it to its parent: Create new team
Fill in the values as follows:
Team name: Following the naming convention described above, use the value defined in the module indexes.
Description: Follow the guidance above (see the Description column in the table above).
Parent team: Follow the guidance above (see the Permissions granted through column in the table above).
Team visibility: Visible
Team notifications: Enabled
CODEOWNERS file
As part of the “initial Pull Request” (that publishes the first version of the module), module owners MUST add an entry to the CODEOWNERS file in the BRM repository (here).
Note
Through this approach, the AVM core team will grant review permission to module owners as part of the standard PR review process.
Every CODEOWNERS entry (line) MUST include the following segments separated by a single whitespace character:
Path of the module, relative to the repo’s root, e.g.: /avm/res/network/virtual-network/
The -module-owners-team, with the @Azure/ prefix, e.g., @Azure/avm-res-network-virtualnetwork-module-owners-bicep
The GitHub team of the AVM Bicep reviewers, with the @Azure/ prefix, i.e., @Azure/avm-module-reviewers-bicep
Example - CODEOWNERS entry for the Bicep resource module of Azure Virtual Network (avm/res/network/virtual-network):
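Based on the segments listed above, the entry would look like the following (reconstructed from the rules above, as the original example did not survive extraction): /avm/res/network/virtual-network/ @Azure/avm-res-network-virtualnetwork-module-owners-bicep @Azure/avm-module-reviewers-bicep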
Module owners MUST assign the -module-owners- and -module-contributors- teams the necessary permissions on their Terraform module repository per the guidance below.
| GitHub Team Name | Description | Permissions | Permissions granted through | Where to work? |
| --- | --- | --- | --- | --- |
| `<module name>-module-owners-tf` | AVM Terraform Module Owners - `<module name>` | Admin | Direct assignment to repo | Module owner can decide whether they want to work in a branch local to the repo or in a fork. |
Only the latest released version of a module MUST be supported.
For example, if an AVM Resource Module is used in an AVM Pattern Module that was working but now is not, the first step for the AVM Pattern Module owner should be to upgrade to the latest version of the AVM Resource Module and test again; if the issue is not fixed, troubleshoot and fix forward from that latest version of the AVM Resource Module onwards.
This avoids AVM module owners having to maintain multiple major release versions.
```shell
# Linux / MacOs
# For Windows replace $PWD with the local path of your repository
#
docker run -it -v $PWD:/repo -w /repo mcr.microsoft.com/powershell pwsh -Command '
Invoke-WebRequest -Uri "https://zojovano.github.io/azure-verified-modules-copy/scripts/Set-AvmGitHubLabels.ps1" -OutFile "Set-AvmGitHubLabels.ps1"
$gh_version = "2.44.1"
Invoke-WebRequest -Uri "https://github.com/cli/cli/releases/download/v2.44.1/gh_2.44.1_linux_amd64.tar.gz" -OutFile "gh_$($gh_version)_linux_amd64.tar.gz"
apt-get update && apt-get install -y git
tar -xzf "gh_$($gh_version)_linux_amd64.tar.gz"
ls -lsa
mv "gh_$($gh_version)_linux_amd64/bin/gh" /usr/local/bin/
rm "gh_$($gh_version)_linux_amd64.tar.gz" && rm -rf "gh_$($gh_version)_linux_amd64"
gh --version
ls -lsa
gh auth login
$OrgProject = "Azure/terraform-azurerm-avm-res-kusto-cluster"
gh auth status
./Set-AvmGitHubLabels.ps1 -RepositoryName $OrgProject -CreateCsvLabelExports $false -NoUserPrompts $true
'
```
By default this script will only update and append labels on the repository specified. However, this can be changed by setting the parameter -UpdateAndAddLabelsOnly to $false, which will remove all the labels from the repository first and then apply the AVM labels from the CSV only.
Make sure you elevate your privilege to admin level or the labels will not be applied to your repository. Go to repos.opensource.microsoft.com/orgs/Azure/repos/ to request admin access before running the script.
Full Script:
The Set-AvmGitHubLabels.ps1 script can be downloaded from here.
[Diagnostics.CodeAnalysis.SuppressMessageAttribute("PSAvoidUsingWriteHost", "", Justification = "Coloured output required in this script")]
<#
.SYNOPSIS
This script can be used to create the Azure Verified Modules (AVM) standard GitHub labels to a GitHub repository.

.DESCRIPTION
This script can be used to create the Azure Verified Modules (AVM) standard GitHub labels to a GitHub repository.

By default, the script will remove all pre-existing labels and apply the AVM labels. However, this can be changed by using the -RemoveExistingLabels parameter and setting it to $false. The tool will also output the labels that exist in the repository before and after the script has run to a CSV file in the current directory, or a directory specified by the -OutputDirectory parameter.

The AVM labels to be created are documented here: TBC

.NOTES
Please ensure you have specified the GitHub repository correctly. The script will prompt you to confirm the repository name before proceeding.

.COMPONENT
You must have the GitHub CLI installed and be authenticated to a GitHub account with access to the repository you are applying the labels to before running this script.

.LINK
TBC
.Parameter RepositoryName
The name of the GitHub repository to apply the labels to.
.Parameter RemoveExistingLabels
If set to $true, the default value, the script will remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels. If set to $false, the script will not remove any pre-existing labels.
.Parameter UpdateAndAddLabelsOnly
If set to $true, the default value, the script will only update and add labels to the repository specified in -RepositoryName. If set to $false, the script will remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels.
.Parameter OutputDirectory
The directory to output the pre-existing and post-existing labels to in a CSV file. The default value is the current directory.
.Parameter CreateCsvLabelExports
If set to $true, the default value, the script will output the pre-existing and post-existing labels to a CSV file in the current directory, or a directory specified by the -OutputDirectory parameter. If set to $false, the script will not output the pre-existing and post-existing labels to a CSV file.
.Parameter GitHubCliLimit
The maximum number of labels to return from the GitHub CLI. The default value is 999.
.Parameter LabelsToApplyCsvUri
The URI to the CSV file containing the labels to apply to the GitHub repository. The default value is https://raw.githubusercontent.com/jtracey93/label-source/main/avm-github-labels.csv.
.Parameter NoUserPrompts
If set to $true, the default value, the script will not prompt the user to confirm they want to remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels. If set to $false, the script will prompt the user to confirm they want to remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels.
This is useful for running the script in automation workflows
.EXAMPLE Create the AVM labels in the repository Org/MyGitHubRepo and remove all pre-existing labels.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo"
.EXAMPLE Create the AVM labels in the repository Org/MyGitHubRepo and do not remove any pre-existing labels, just overwrite any labels that have the same name.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -RemoveExistingLabels $false
.EXAMPLE Create the AVM labels in the repository Org/MyGitHubRepo and output the pre-existing and post-existing labels to the directory C:\GitHubLabels.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -OutputDirectory "C:\GitHubLabels"
.EXAMPLE Create the AVM labels in the repository Org/MyGitHubRepo and output the pre-existing and post-existing labels to the directory C:\GitHubLabels and do not remove any pre-existing labels, just overwrite any labels that have the same name.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -OutputDirectory "C:\GitHubLabels" -RemoveExistingLabels $false
.EXAMPLE Create the AVM labels in the repository Org/MyGitHubRepo and do not create the pre-existing and post-existing labels CSV files and do not remove any pre-existing labels, just overwrite any labels that have the same name.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -RemoveExistingLabels $false -CreateCsvLabelExports $false
.EXAMPLE Create the AVM labels in the repository Org/MyGitHubRepo and do not create the pre-existing and post-existing labels CSV files and do not remove any pre-existing labels, just overwrite any labels that have the same name. Finally, use a custom CSV file hosted on the internet to create the labels from.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -OutputDirectory "C:\GitHubLabels" -RemoveExistingLabels $false -CreateCsvLabelExports $false -LabelsToApplyCsvUri "https://example.com/csv/avm-github-labels.csv"
#>

#Requires -PSEdition Core

[CmdletBinding()]
param (
[Parameter(Mandatory = $true)]
[string]$RepositoryName,
[Parameter(Mandatory = $false)]
[bool]$RemoveExistingLabels = $true,
[Parameter(Mandatory = $false)]
[bool]$UpdateAndAddLabelsOnly = $true,
[Parameter(Mandatory = $false)]
[bool]$CreateCsvLabelExports = $true,
[Parameter(Mandatory = $false)]
[string]$OutputDirectory = (Get-Location),
[Parameter(Mandatory = $false)]
[int]$GitHubCliLimit = 999,
[Parameter(Mandatory = $false)]
[string]$LabelsToApplyCsvUri = "https://zojovano.github.io/azure-verified-modules-copy/governance/avm-standard-github-labels.csv",
[Parameter(Mandatory = $false)]
[bool]$NoUserPrompts = $false
)
# Check if the GitHub CLI is installed
$GitHubCliInstalled = Get-Command gh -ErrorAction SilentlyContinue
if ($null -eq $GitHubCliInstalled) {
throw "The GitHub CLI is not installed. Please install the GitHub CLI and try again."
}
Write-Host "The GitHub CLI is installed..." -ForegroundColor Green
# Check if GitHub CLI is authenticated
$GitHubCliAuthenticated = gh auth status
if ($LASTEXITCODE -ne 0) {
Write-Host $GitHubCliAuthenticated -ForegroundColor Red
throw "Not authenticated to GitHub. Please authenticate to GitHub using the GitHub CLI, `gh auth login`, and try again."
}
Write-Host "Authenticated to GitHub..." -ForegroundColor Green
# Check if GitHub repository name is valid
$GitHubRepositoryNameValid = $RepositoryName -match "^[a-zA-Z0-9-]+/[a-zA-Z0-9-]+$"
if ($false -eq $GitHubRepositoryNameValid) {
throw "The GitHub repository name $RepositoryName is not valid. Please check the repository name and try again. The format must be <OrgName>/<RepoName>"
}
# List GitHub repository provided and check it exists
$GitHubRepository = gh repo view $RepositoryName
if ($LASTEXITCODE -ne 0) {
Write-Host $GitHubRepository -ForegroundColor Red
throw "The GitHub repository $RepositoryName does not exist. Please check the repository name and try again."
}
Write-Host "The GitHub repository $RepositoryName exists..." -ForegroundColor Green
# PRE - Get the current GitHub repository labels and export to a CSV file in the current directory or where -OutputDirectory specifies if set to a valid directory path and the directory exists or can be created if it does not exist already
if ($RemoveExistingLabels -or $UpdateAndAddLabelsOnly) {
Write-Host "Getting the current GitHub repository (pre) labels for $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels = gh label list -R $RepositoryName -L $GitHubCliLimit --json name,description,color
if ($null -ne $GitHubRepositoryLabels -and $CreateCsvLabelExports -eq $true) {
$csvFileNamePathPre = "$OutputDirectory\$($RepositoryName.Replace('/', '_'))-Labels-Pre-$(Get-Date -Format FileDateTime).csv"
Write-Host "Exporting the current GitHub repository (pre) labels for $RepositoryName to $csvFileNamePathPre" -ForegroundColor Yellow
$GitHubRepositoryLabels | ConvertFrom-Json | Export-Csv -Path $csvFileNamePathPre -NoTypeInformation
}
}
# Remove all pre-existing labels if -RemoveExistingLabels is set to $true and user confirms they want to remove all pre-existing labels
if ($null -ne $GitHubRepositoryLabels) {
$GitHubRepositoryLabelsJson = $GitHubRepositoryLabels | ConvertFrom-Json
if ($RemoveExistingLabels -eq $true -and $NoUserPrompts -eq $false -and $UpdateAndAddLabelsOnly -eq $false) {
$RemoveExistingLabelsConfirmation = Read-Host "Are you sure you want to remove all $($GitHubRepositoryLabelsJson.Count) pre-existing labels from $($RepositoryName)? (Y/N)"
if ($RemoveExistingLabelsConfirmation -eq "Y") {
Write-Host "Removing all pre-existing labels from $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels | ConvertFrom-Json | ForEach-Object {
Write-Host "Removing label $($_.name) from $RepositoryName..." -ForegroundColor DarkRed
gh label delete -R $RepositoryName $_.name --yes
}
}
}
if ($RemoveExistingLabels -eq $true -and $NoUserPrompts -eq $true -and $UpdateAndAddLabelsOnly -eq $false) {
Write-Host "Removing all pre-existing labels from $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels | ConvertFrom-Json | ForEach-Object {
Write-Host "Removing label $($_.name) from $RepositoryName..." -ForegroundColor DarkRed
gh label delete -R $RepositoryName $_.name --yes
}
}
}
if ($null -eq $GitHubRepositoryLabels) {
Write-Host "No pre-existing labels to remove or not selected to be removed from $RepositoryName..." -ForegroundColor Magenta
}
# Check LabelsToApplyCsvUri is valid and contains CSV content
Write-Host "Checking $LabelsToApplyCsvUri is valid..." -ForegroundColor Yellow
$LabelsToApplyCsvUriValid = $LabelsToApplyCsvUri -match "^https?://"
if ($false -eq $LabelsToApplyCsvUriValid) {
throw "The LabelsToApplyCsvUri $LabelsToApplyCsvUri is not valid. Please check the URI and try again. The format must be a valid URI."
}
Write-Host "The LabelsToApplyCsvUri $LabelsToApplyCsvUri is valid..." -ForegroundColor Green
# Create AVM labels from the AVM labels CSV file stored on the web using the ConvertFrom-Csv cmdlet
$avmLabelsCsv = Invoke-WebRequest -Uri $LabelsToApplyCsvUri | ConvertFrom-Csv
# Check if the AVM labels CSV file contains the following columns: Name, Description, HEX
$avmLabelsCsvColumns = $avmLabelsCsv | Get-Member -MemberType NoteProperty | Select-Object -ExpandProperty Name
$avmLabelsCsvColumnsValid = $avmLabelsCsvColumns -contains "Name" -and $avmLabelsCsvColumns -contains "Description" -and $avmLabelsCsvColumns -contains "HEX"
if ($false -eq $avmLabelsCsvColumnsValid) {
throw "The labels CSV file does not contain the required columns: Name, Description, HEX. Please check the CSV file and try again. It contains the following columns: $avmLabelsCsvColumns"
}
Write-Host "The labels CSV file contains the required columns: Name, Description, HEX" -ForegroundColor Green
# Create the AVM labels in the GitHub repository
Write-Host "Creating/Updating the $($avmLabelsCsv.Count) AVM labels in $RepositoryName..." -ForegroundColor Yellow
$avmLabelsCsv | ForEach-Object {
if ($GitHubRepositoryLabelsJson.name -contains $_.name) {
Write-Host "The label $($_.name) already exists in $RepositoryName. Updating the label to ensure description and color are consistent..." -ForegroundColor Magenta
gh label create -R $RepositoryName "$($_.name)" -c $_.HEX -d $($_.Description) --force
}
else {
Write-Host "The label $($_.name) does not exist in $RepositoryName. Creating label $($_.name) in $RepositoryName..." -ForegroundColor Cyan
gh label create -R $RepositoryName "$($_.Name)" -c $_.HEX -d $($_.Description) --force
}
}
# POST - Get the current GitHub repository labels and export to a CSV file in the current directory or where -OutputDirectory specifies if set to a valid directory path and the directory exists or can be created if it does not exist already
if ($CreateCsvLabelExports -eq $true) {
Write-Host "Getting the current GitHub repository (post) labels for $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels = gh label list -R $RepositoryName -L $GitHubCliLimit --json name,description,color
if ($null -ne $GitHubRepositoryLabels) {
$csvFileNamePathPre = "$OutputDirectory\$($RepositoryName.Replace('/', '_'))-Labels-Post-$(Get-Date -Format FileDateTime).csv"
Write-Host "Exporting the current GitHub repository (post) labels for $RepositoryName to $csvFileNamePathPre" -ForegroundColor Yellow
$GitHubRepositoryLabels | ConvertFrom-Json | Export-Csv -Path $csvFileNamePathPre -NoTypeInformation
}
}
# If -RemoveExistingLabels is set to $true and user confirms they want to remove all pre-existing labels, check that only the AVM labels exist in the repository
if ($RemoveExistingLabels -eq $true -and ($RemoveExistingLabelsConfirmation -eq "Y" -or $NoUserPrompts -eq $true) -and $UpdateAndAddLabelsOnly -eq $false) {
Write-Host "Checking that only the AVM labels exist in $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels = gh label list -R $RepositoryName -L $GitHubCliLimit --json name,description,color
$GitHubRepositoryLabels | ConvertFrom-Json | ForEach-Object {
if ($avmLabelsCsv.Name -notcontains $_.name) {
throw "The label $($_.name) exists in $RepositoryName but is not in the CSV file."
}
}
Write-Host "Only the CSV labels exist in $RepositoryName..." -ForegroundColor Green
}
Write-Host "The CSV labels have been created/updated in $RepositoryName..." -ForegroundColor Green
Module owners MUST set a branch protection policy on their GitHub Repositories for AVM modules against their default branch, typically main, to do the following:
Require a Pull Request before merging
Require approval of the most recent reviewable push
Dismiss stale pull request approvals when new commits are pushed
Require linear history
Prevent force pushes
Do not allow deletions
Require CODEOWNERS review
Do not allow bypassing the above settings
The above settings MUST also be enforced for administrators
Tip
If you use the template repository as mentioned in the contribution guide, the above will automatically be set.
Telemetry
We will maintain a set of CSV files in the AVM Central Repo (Azure/Azure-Verified-Modules) with the required TelemetryId prefixes to enable checks to utilize this list to ensure the correct IDs are used. To see the formatted content of these CSV files with additional information, please visit the AVM Module Indexes page.
These will also be provided as a comment on the module proposal, once accepted, from the AVM core team.
Modules MUST provide the capability to collect deployment/usage telemetry as detailed in Telemetry further.
To highlight that AVM modules use telemetry, an information notice MUST be included in the footer of each module’s README.md file with the below content. (See more details on this requirement, here.)
Telemetry Information Notice
Note
The following information notice is automatically added at the bottom of the README.md file of the module when
Terraform: Executing the make docs command with the note and header ## Data Collection being placed in the module’s _footer.md beforehand
### Data Collection
The software may collect information about you and your use of the software and send it to Microsoft. Microsoft may use this information to provide services and improve our products and services. You may turn off the telemetry as described in the [repository](https://aka.ms/avm/telemetry). There are also some features in the software that may enable you and Microsoft to collect data from users of your applications. If you use these features, you must comply with applicable law, including providing appropriate notices to users of your applications together with a copy of Microsoft’s privacy statement. Our privacy statement is located at <https://go.microsoft.com/fwlink/?LinkID=824704>. You can learn more about data collection and use in the help documentation and our privacy statement. Your use of the software operates as your consent to these practices.
The ARM deployment name used for the telemetry MUST follow the pattern and MUST be no longer than 64 characters in length: 46d3xbcp.<res/ptn>.<(short) module name>.<version>.<uniqueness>
<res/ptn> == AVM Resource or Pattern Module
<(short) module name> == The AVM Module’s, possibly shortened, name including the resource provider and the resource type, without;
The prefixes: avm-res-
The prefixes: avm-ptn-
<version> == The AVM Module’s MAJOR.MINOR version (only) with . (periods) replaced with - (hyphens), to allow simpler splitting of the ARM deployment name
<uniqueness> == This section of the ARM deployment name is to be used to ensure uniqueness of the deployment name.
This is to cater for the following scenarios:
The module is deployed multiple times to the same:
Due to the 64-character length limit of Azure deployment names, the <(short) module name> segment has a length limit of 36 characters, so if the module name is longer than that, it MUST be truncated to 36 characters. If any of the semantic version’s segments are longer than 1 character, it further restricts the number of characters that can be used for naming the module.
An example deployment name for the AVM Virtual Machine Resource Module would be: 46d3xbcp.res.compute-virtualmachine.1-2-3.eum3
An example deployment name for a shortened module name would be: 46d3xbcp.res.desktopvirtualization-appgroup.1-2-3.eum3
Tip
Terraform: Terraform uses a telemetry provider, the configuration of which is the same for every module and is included in the template repo.
General: See the language specific contribution guides for detailed guidance and sample code to use in AVM modules to achieve this requirement.
To enable telemetry data collection for Terraform modules, the modtm telemetry provider MUST be used. This lightweight telemetry provider sends telemetry data to Azure Application Insights via an HTTP POST front-end service.
The modtm telemetry provider is included in all Terraform modules and is enabled by default through the main.telemetry.tf file being automatically distributed from the template repo.
The modtm provider MUST be listed under the required_providers section in the module’s terraform.tf file using the following entry. This is also validated by the linter.
The telemetry enablement MUST be on/enabled by default, however this MUST be able to be disabled by a module consumer by setting the below parameter/variable value to false:
Bicep: enableTelemetry
Terraform: enable_telemetry
Note
Whenever a module references AVM modules that implement the telemetry parameter (e.g., a pattern module that uses AVM resource modules), the telemetry parameter value MUST be passed through to these modules. This is necessary to ensure a consumer can reliably enable & disable the telemetry feature for all used modules.
This general specification can be modified for some use cases that are language specific:
Bicep
For cross-references in resource modules, the spec BCPFR7 also applies.
Terraform
Currently, no further requirements apply.
Naming / Composition
Modules MAY create/adopt public preview services and features at their discretion.
Preview API versions MAY be used when:
The resource/service/feature is GA but the only API version available for the GA resource/service/feature is a preview version
For example, for Diagnostic Settings (Microsoft.Insights/diagnosticSettings), the latest API version available with GA features, like Category Groups etc., is 2021-05-01-preview
Otherwise the latest “non-preview” version of the API SHOULD be used
Preview services and features SHOULD NOT be promoted and exposed unless they are supported by the respective PG and this support is documented publicly.
However, they MAY be exposed at the module owner's discretion, but the following rules MUST be followed:
The description of each of the parameters/variables used for the preview service/feature MUST start with:
“THIS IS A <PARAMETER/VARIABLE> USED FOR A PREVIEW SERVICE/FEATURE, MICROSOFT MAY NOT PROVIDE SUPPORT FOR THIS, PLEASE CHECK THE PRODUCT DOCS FOR CLARIFICATION”
Modules SHOULD set defaults in input parameters/variables to align to high priority/impact/severity recommendations, where appropriate and applicable, in the following frameworks and resources:
They SHOULD NOT align to these recommendations when it requires an external dependency/resource to be deployed and configured and then associated to the resources in the module.
Alignment SHOULD prioritize best-practices and security over cost optimization, but MUST allow for these to be overridden by a module consumer easily, if desired.
ID: SFR5 - Category: Composition - Availability Zones
Modules that deploy zone-redundant resources MUST enable the spanning across as many zones as possible by default, typically all 3.
Modules that deploy zonal resources MUST provide the ability to specify a zone for the resources to be deployed/pinned to. However, they MUST NOT default to a particular zone, e.g. 1, in an effort to make the consumer aware of the zone they are selecting to suit their architecture requirements.
For both scenarios the modules MUST expose these configuration options via configurable parameters/variables.
ID: SFR6 - Category: Composition - Data Redundancy
Modules that deploy resources or patterns that support data redundancy SHOULD enable this to the highest possible value by default, e.g. RA-GZRS. When a resource or pattern doesn’t provide the ability to specify data redundancy as a simple property, e.g. GRS etc., then the modules MUST provide the ability to enable data redundancy for the resources or pattern via parameters/variables.
For example, a Storage Account module can simply set the sku.name property to Standard_RAGZRS. Whereas a SQL DB or Cosmos DB module will need to expose more properties, via parameters/variables, to allow the specification of the regions to replicate data to as per the consumer's requirements.
Module owners MUST set the default resource name prefix for child, extension, and interface resources to the associated abbreviation for the specific resource as documented in the following CAF article Abbreviation examples for Azure resources, if specified and documented. This reduces the amount of input values a module consumer MUST provide by default when using the module.
For example, a Private Endpoint that is being deployed as part of a resource module, via the mandatory interfaces, MUST set the Private Endpoint’s default name to begin with the prefix of pep-.
Module owners MUST also provide the ability for these default names, including the prefixes, to be overridden via a parameter/variable if the consumer wishes to.
Furthermore, as per RMNFR2, Resource Modules MUST NOT have a default value specified for the name of the primary resource and therefore the name MUST be provided and specified by the module consumer.
The name provided MAY be used by the module owner to generate the rest of the default name for child, extension, and interface resources if they wish to. For example, for the Private Endpoint mentioned above, the full default name that can be overridden by the consumer, MAY be pep-<primary-resource-name>.
Tip
If the resource does not have a documented abbreviation in Abbreviation examples for Azure resources, then the module owner is free to use a sensible prefix instead.
Resource modules support the following optional features/extension resources, as specified, if supported by the primary resource. The top-level variable/parameter names MUST be:
| Optional Features/Extension Resources | Bicep Parameter Name | Terraform Variable Name | MUST/SHOULD |
| --- | --- | --- | --- |
| Diagnostic Settings | diagnosticSettings | diagnostic_settings | MUST |
| Role Assignments | roleAssignments | role_assignments | MUST |
| Resource Locks | lock | lock | MUST |
| Tags | tags | tags | MUST |
| Managed Identities (System / User Assigned) | managedIdentities | managed_identities | MUST |
| Private Endpoints | privateEndpoints | private_endpoints | MUST |
| Customer Managed Keys | customerManagedKey | customer_managed_key | MUST |
| Azure Monitor Alerts | alerts | alerts | SHOULD |
Resource modules MUST NOT deploy required/dependent resources for the optional features/extension resources specified above. For example, for Diagnostic Settings the resource module MUST NOT deploy the Log Analytics Workspace, this is expected to be already in existence from the perspective of the resource module deployed via another method/module etc.
Note
Please note that the implementation of Customer Managed Keys from an ARM API perspective is different across various RPs that implement Customer Managed Keys in their service. For that reason you may see differences between modules on how Customer Managed Keys are handled and implemented, but functionality will be as expected.
Module owners MAY choose to utilize cross repo dependencies for these “add-on” resources, or MAY choose to implement the code directly in their own repo/module. So long as the implementation and outputs are as per the specification’s requirements, then this is acceptable.
Tip
Make sure to check out the language-specific specifications for more info on this:
Resource modules MUST implement a common interface, e.g. the input’s data structures and properties within them (objects/arrays/dictionaries/maps), for the optional features/extension resources:
When a given version of an Azure resource used in a resource module reaches its end-of-life (EOL) and is no longer supported by Microsoft, the module owner SHOULD ensure that:
The module is aligned with these changes and only includes supported versions of the resource. This is typically achieved through the allowed values in the parameter that specifies the resource SKU or type.
The following notice is shown under the Notes section of the module’s readme.md. (If any related public announcement is available, it can also be linked to from the Notes section.):
“Certain versions of this Azure resource reached their end of life. The latest version of this module only includes supported versions of the resource. All unsupported versions have been removed from the related parameters.”
AND the related parameter’s description:
“Certain versions of this Azure resource reached their end of life. The latest version of this module only includes supported versions of the resource. All unsupported versions have been removed from this parameter.”
We will maintain a set of CSV files in the AVM Central Repo (Azure/Azure-Verified-Modules) with the correct singular names for all resource types to enable checks to utilize this list to ensure repos are named correctly. To see the formatted content of these CSV files with additional information, please visit the AVM Module Indexes page.
This will be updated quarterly, or ad-hoc as new RPs/resources are created and highlighted via a check failure.
Resource modules MUST follow the below naming conventions (all lower case):
Bicep Resource Module Naming
Naming convention (module name for registry): avm/res/<hyphenated resource provider name>/<hyphenated ARM resource type>
Example: avm/res/compute/virtual-machine or avm/res/managed-identity/user-assigned-identity
Segments:
res defines this is a resource module
<hyphenated resource provider name> is the resource provider’s name after the Microsoft part, with each word starting with a capital letter separated by dashes, e.g., Microsoft.Compute = compute, Microsoft.ManagedIdentity = managed-identity.
<hyphenated ARM resource type> is the singular version of the word after the resource provider, with each word starting with a capital letter separated by dashes, e.g., Microsoft.Compute/virtualMachines = virtual-machine, BUT Microsoft.Network/trafficmanagerprofiles = trafficmanagerprofile - since trafficmanagerprofiles is all lower case as per the ARM API definition.
Bicep Child Module Naming
Naming convention (module name for registry): avm/res/<hyphenated resource provider name>/<hyphenated ARM resource type>/<hyphenated child resource type>/<hyphenated grandchild resource type>/<etc.>
Example: avm/res/network/virtual-network/subnet or avm/res/storage/storage-account/blob-service/container
Segments:
res defines this is a resource module
<hyphenated resource provider name> is the resource provider’s name after the Microsoft part, with each word starting with a capital letter separated by dashes, e.g., Microsoft.Network = network.
<hyphenated ARM resource type> is the singular version of the word after the resource provider, with each word starting with a capital letter separated by dashes, e.g., Microsoft.Network/virtualNetworks = virtual-network.
<hyphenated child resource type (to be repeated for grandchildren, etc.)> is the singular version of the word after the resource provider, with each word starting with a capital letter separated by dashes, e.g., Microsoft.Network/virtualNetworks/subnets = subnet or Microsoft.Storage/storageAccounts/blobServices/containers = blob-service/container.
Terraform Resource Module Naming
Naming convention:
avm-res-<resource provider>-<ARM resource type> (module name for registry)
terraform-<provider>-avm-res-<resource provider>-<ARM resource type> (GitHub repository name to meet registry naming requirements)
Example: avm-res-compute-virtualmachine or avm-res-managedidentity-userassignedidentity
Segments:
<provider> is the logical abstraction of various APIs used by Terraform. In most cases, this is going to be azurerm or azuread for resource modules.
res defines this is a resource module
<resource provider> is the resource provider’s name after the Microsoft part, e.g., Microsoft.Compute = compute.
<ARM resource type> is the singular version of the word after the resource provider, e.g., Microsoft.Compute/virtualMachines = virtualmachine
ID: RMNFR3 - Category: Composition - RP Collaboration
Module owners (Microsoft FTEs) SHOULD reach out to the respective Resource Provider teams to build a partnership and collaboration on the modules creation, existence and long term maintenance.
Module owners MAY cross-reference other modules to build either Resource or Pattern modules. However, they MUST be referenced only by a HashiCorp Terraform registry reference to a pinned version, e.g.:
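A minimal sketch of such a pinned cross-reference is shown below; the module source, version and inputs are illustrative placeholders rather than the exact interface of a specific published AVM module:

```terraform
module "virtual_network" {
  # Hypothetical AVM module reference; always pin to an exact published registry version.
  source  = "Azure/avm-res-network-virtualnetwork/azurerm"
  version = "0.2.3"

  name                = "vnet-example"
  location            = "westeurope"
  resource_group_name = "rg-example"
}
```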
Module owners MUST use map(xxx) or set(xxx) as a resource’s for_each collection; the map’s keys or set’s elements MUST be static literals.
Good example:
resource"azurerm_subnet""pair" {
for_each = var.subnet_map // `map(string)`, when user call this module, it could be: `{ "subnet0": "subnet0" }`, or `{ "subnet0": azurerm_subnet.subnet0.name }`
name = "${each.value}"-pairresource_group_name = azurerm_resource_group.example.namevirtual_network_name = azurerm_virtual_network.example.nameaddress_prefixes = ["10.0.1.0/24"]
}
Bad example:
resource"azurerm_subnet""pair" {
for_each = var.subnet_name_set // `set(string)`, when user use `toset([azurerm_subnet.subnet0.name])`, it would cause an error.
name = "${each.value}"-pairresource_group_name = azurerm_resource_group.example.namevirtual_network_name = azurerm_virtual_network.example.nameaddress_prefixes = ["10.0.1.0/24"]
}
There are three types of assignment statements in a resource or data block: argument, meta-argument and nested block. The argument assignment statement is a name followed by =, for example:
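For illustration, the sketch below shows all three statement types on a resource group resource; var.create_resource_group is an assumed variable, and the resource arguments are standard azurerm ones:

```terraform
resource "azurerm_resource_group" "example" {
  count = var.create_resource_group ? 1 : 0 # meta-argument assignment

  name     = "rg-example"  # argument assignment: a name followed by `=`
  location = "westeurope"  # argument assignment

  timeouts {               # nested block: a block name followed by `{ ... }`, no `=`
    create = "30m"
  }
}
```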
Sometimes we need to ensure that the resources created are compliant with some rules at a minimum extent, for example a subnet has to be connected to at least one network_security_group. The user might pass in a security_group_id and ask the module to connect it to an existing security group, or might want the module to create a new security group.
The disadvantage of this approach is that if the user creates a security group directly in the root module and passes its id as a variable to the module, the expression which determines the value of count will contain an attribute from another resource, and the value of that attribute is “known after apply” at plan stage. Terraform core will then not be able to produce an exact deployment plan during the “plan” stage.
For this kind of parameter, wrapping it with an object type is RECOMMENDED:
variable"security_group" {
type:object({
id = string })
default = null}
The advantage of doing so is that the value which is “known after apply” is encapsulated in an object, and whether the object itself is null or not can easily be determined. Since the id of a resource cannot be null, this approach avoids the situation we faced in the first example, as shown below:
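A minimal sketch of that pattern, using assumed resource and name values: because `var.security_group` is either `null` or an object, the `count` expression is known at plan time even when the wrapped `id` is not.

```terraform
resource "azurerm_network_security_group" "this" {
  # Create a new security group only when the caller did not supply an existing one.
  count = var.security_group == null ? 1 : 0

  name                = "nsg-example"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}
```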
Variables used as feature switches SHOULD apply a positive statement: use xxx_enabled instead of xxx_disabled. Avoid double negatives like !xxx_disabled.
Please use xxx_enabled instead of xxx_disabled as the name of a variable.
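For example (the variable name is illustrative):

```terraform
variable "public_network_access_enabled" {
  type        = bool
  default     = true
  nullable    = false
  description = "Whether public network access is enabled for the resource."
}
```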
ID: TFNFR17 - Category: Code Style - Variables with Descriptions
The target audience of the description is the module’s users.
For a newly created variable (e.g., a variable for switching a dynamic block on and off), its description SHOULD precisely describe the input parameter’s purpose and the expected data type. The description SHOULD NOT contain any information intended for module developers; that kind of information should only exist in code comments.
For an object type variable, the description can be composed in HEREDOC format:
variable"kubernetes_cluster_key_management_service" {
type:object({
key_vault_key_id = stringkey_vault_network_access = optional(string)
})
default = nulldescription = <<-EOT - `key_vault_key_id` - (Required) Identifier of Azure Key Vault key. See [key identifier format](https://learn.microsoft.com/en-us/azure/key-vault/general/about-keys-secrets-certificates#vault-name-and-object-name) for more details. When Azure Key Vault key management service is enabled, this field is required and must be a valid key identifier. When `enabled` is `false`, leave the field empty.
- `key_vault_network_access` - (Optional) Network access of the key vault Network access of key vault. The possible values are `Public` and `Private`. `Public` means the key vault allows public access from all networks. `Private` means the key vault disables public access and enables private link. Defaults to `Public`.
EOT}
ID: TFNFR19 - Category: Code Style - Sensitive Data Variables
If a variable’s type is object and it contains one or more fields that would be assigned to a sensitive argument, then either this whole variable SHOULD be declared as sensitive = true, or you SHOULD extract the sensitive fields into separate variable blocks with sensitive = true.
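A sketch of both options, with assumed variable names:

```terraform
# Option 1: the whole object is marked sensitive because one of its fields is.
variable "administrator" {
  type = object({
    login    = string
    password = string
  })
  sensitive = true
}

# Option 2: the sensitive field is extracted into its own variable block.
variable "administrator_login" {
  type = string
}

variable "administrator_password" {
  type      = string
  sensitive = true
}
```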
Nullable SHOULD be set to false for collection values (e.g. sets, maps, lists) when using them in loops. However for scalar values like string and number, a null value MAY have a semantic meaning and as such these values are allowed.
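For example (the variable shape is an assumption), a collection input used in a loop can default to an empty map with `nullable = false`, so `for_each` never receives `null`:

```terraform
variable "subnets" {
  type = map(object({
    address_prefixes = list(string)
  }))
  default     = {}
  nullable    = false # a null map would break for_each; an empty map is a safe default
  description = "Map of subnets to create."
}
```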
Sometimes we find that the name of a variable is no longer suitable, or that a change SHOULD be made to its data type. We want to ensure forward compatibility within a major version, so direct changes are strictly forbidden. The right way to do this is to move the variable to an independent deprecated_variables.tf file, then redefine the new parameter in variables.tf and make sure it’s compatible everywhere else.
A deprecated variable MUST be annotated as DEPRECATED at the beginning of its description, and at the same time the replacement’s name SHOULD be declared. E.g.,
variable"enable_network_security_group" {
type = stringdefault = nulldescription = "DEPRECATED, use `network_security_group_enabled` instead; Whether to generate a network security group and assign it to the subnet. Changing this forces a new resource to be created."}
A cleanup of deprecated_variables.tf SHOULD be performed during a major version release.
The terraform.tf file MUST only contain one terraform block.
The first line of the terraform block MUST define a required_version property for the Terraform CLI.
The required_version property MUST include a constraint on the minimum version of the Terraform CLI. Previous releases of the Terraform CLI can have unexpected behavior.
The required_version property MUST include a constraint on the maximum major version of the Terraform CLI. Major version releases of the Terraform CLI can introduce breaking changes and MUST be tested.
The required_version property constraint SHOULD use the ~> #.# or the >= #.#.#, < #.#.# format.
Note: You can read more about Terraform version constraints in the documentation.
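A minimal sketch of a compliant terraform.tf (the version numbers are illustrative):

```terraform
terraform {
  # Either constraint style below satisfies the minimum + maximum-major-version rules.
  required_version = "~> 1.6"
  # required_version = ">= 1.6.0, < 2.0.0"
}
```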
ID: TFNFR26 - Category: Code Style - Providers in required_providers
The terraform block in terraform.tf MUST contain the required_providers block.
Each provider used directly in the module MUST be specified with the source and version properties. Providers in the required_providers block SHOULD be sorted in alphabetical order.
Do not add providers to the required_providers block that are not directly required by this module. If submodules are used then each submodule SHOULD have its own versions.tf file.
The source property MUST be in the format of namespace/name. If this is not explicitly specified, it can cause failure.
The version property MUST include a constraint on the minimum version of the provider. Older provider versions may not work as expected.
The version property MUST include a constraint on the maximum major version. A provider major version release may introduce breaking changes, so updates to the major version constraint for a provider MUST be tested.
The version property constraint SHOULD use the ~> #.# or the >= #.#.#, < #.#.# format.
Note: You can read more about Terraform version constraints in the documentation.
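A sketch of a required_providers block following these rules (the providers listed and their version constraints are illustrative):

```terraform
terraform {
  required_version = "~> 1.6"

  required_providers {
    # Alphabetical order; each entry pins a source and a minimum/maximum-major version constraint.
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.71"
    }
    random = {
      source  = "hashicorp/random"
      version = ">= 3.5.0, < 4.0.0"
    }
  }
}
```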
As a rule, provider MUST NOT be declared in the module code. The only exception is when the module indeed needs different instances of the same kind of provider (e.g., manipulating resources across different locations or accounts); in that case you MUST declare configuration_aliases in terraform.required_providers. See details in this document.
A provider block declared in the module MUST only be used to differentiate instances used in resource and data blocks. Declaring fields other than alias in a provider block is strictly forbidden, as it could leave module users unable to utilize count, for_each or depends_on. Configurations of the provider instance SHOULD be passed in by the module users.
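A minimal sketch of the exception case, with an assumed alias name: the module only declares the alias via configuration_aliases and references it, while the caller supplies the configured provider instances.

```terraform
terraform {
  required_providers {
    azurerm = {
      source                = "hashicorp/azurerm"
      version               = "~> 3.71"
      configuration_aliases = [azurerm.secondary]
    }
  }
}

resource "azurerm_resource_group" "secondary_region" {
  provider = azurerm.secondary # provider instance configured and passed in by the module user

  name     = "rg-secondary"
  location = "northeurope"
}
```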
Sometimes we notice that the name of a certain output is no longer appropriate; however, since we have to ensure forward compatibility within the same major version, its name MUST NOT be changed directly. It MUST be moved to an independent deprecated_outputs.tf file, then a new output MUST be defined in outputs.tf and made compatible everywhere else in the module.
A cleanup of deprecated_outputs.tf and other compatibility-related logic SHOULD be performed during a major version upgrade.
ID: TFNFR31 - Category: Code Style - locals.tf for Locals Only
In the locals.tf file, we could declare multiple locals blocks, but only locals blocks are allowed.
You MAY declare locals blocks next to a resource block or data block for some advanced scenarios, like making a fake module to execute some light-weight tests aimed at the expressions.
Since Terraform AzureRM provider 3.0, the default value of prevent_deletion_if_contains_resources in the provider block is true. This can lead to unstable tests because the test subscription has some policies applied, and they will add some extra resources during the run, which can cause failures during the destroy of resource groups.
Since we cannot guarantee that our testing environment won’t have Azure Policy remediation tasks applied to it in the future, for a robust testing environment prevent_deletion_if_contains_resources SHOULD be explicitly set to false.
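In an example/test root module (not in the module itself), this could look like the following sketch:

```terraform
provider "azurerm" {
  features {
    resource_group {
      # Allow test resource groups to be destroyed even if policy remediation added extra resources.
      prevent_deletion_if_contains_resources = false
    }
  }
}
```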
newres is a command-line tool that generates Terraform configuration files for a specified resource type. It automates the process of creating variables.tf and main.tf files, making it easier to get started with Terraform and reducing the time spent on manual configuration.
Module owners MAY use newres when adding a new resource block, attribute, or nested block. They MAY generate the whole block along with the corresponding variable blocks in an empty folder, then copy and paste the parts they need, with essential refactoring.
Inputs / Outputs
The content below is listed based on the following tags
ID: SNFR22 - Category: Inputs - Parameters/Variables for Resource IDs
A module parameter/variable that requires a full Azure Resource ID as an input value, e.g. /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.KeyVault/vaults/{keyVaultName}, MUST contain ResourceId/resource_id in its parameter/variable name to assist users in knowing what value to provide at a glance of the parameter/variable name.
For example, take the property workspaceId of the Diagnostic Settings resource: in Bicep the parameter name should be workspaceResourceId, and in Terraform the variable name should be workspace_resource_id.
workspaceId is not descriptive enough and is ambiguous as to which ID is required to be input.
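As a sketch for the Terraform side (the description text and default are illustrative assumptions):

```terraform
variable "workspace_resource_id" {
  type        = string
  default     = null
  description = "Full resource ID of the Log Analytics Workspace to send diagnostic data to."
}
```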
Parameters/variables that pertain to the primary resource MUST NOT use the resource type in the name.
e.g., use sku, vs. virtualMachineSku/virtualmachine_sku
Another example: where an RP contains some of its name within a property, leave the property unchanged. E.g., Key Vault has a property called keySize; it is fine to leave this as-is and not remove the key part from the property/parameter name.
A resource module MUST use the following standard inputs:
name (no default)
location (if supported by the resource and not a global resource, then use Resource Group location, if resource supports Resource Groups, otherwise no default)
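One possible Terraform shape for these standard inputs is sketched below; the description wording and the null-means-resource-group-location convention are assumptions, not a mandated implementation:

```terraform
variable "name" {
  type        = string
  description = "The name of the primary resource. No default; the consumer must provide it."
}

variable "location" {
  type        = string
  default     = null # when null, the module may fall back to the resource group's location
  description = "Azure region for the resource."
}
```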
Authors SHOULD NOT output entire resource objects, as these may contain sensitive outputs and the schema can change with API or provider versions. Instead, authors SHOULD output the computed attributes of the resource as discrete outputs. This kind of pattern protects against provider schema changes and is known as an anti-corruption layer.
Remember, you SHOULD NOT output values that are already inputs (other than name).
E.g.,
# Resource output, computed attribute.
output "foo" {
  description = "MyResource foo attribute"
  value       = azurerm_resource_myresource.foo
}

# Resource output for resources that are deployed using `for_each`. Again only computed attributes.
output "childresource_foos" {
  description = "MyResource children's foo attributes"
  value = {
    for key, value in azurerm_resource_mychildresource : key => value.foo
  }
}

# Output of a sensitive attribute
output "bar" {
  description = "MyResource bar attribute"
  value       = azurerm_resource_myresource.bar
  sensitive   = true
}
ID: TFNFR14 - Category: Inputs - Not allowed variables
Since Terraform 0.13 introduced count, for_each and depends_on for modules, module development has been significantly simplified. Module owners MUST NOT add variables like enabled or module_depends_on to control the entire module’s operation. Boolean feature toggles are acceptable, however.
Testing
The content below is listed based on the following tags
Modules MUST implement end-to-end (deployment) testing that creates actual resources to validate that module deployments work. In Bicep, tests are sourced from the directories in /tests/e2e. In Terraform, these are in /examples.
Each test MUST run and complete without user inputs successfully, for automation purposes.
Each test MUST also destroy/clean-up its resources and test dependencies following a run.
Tip
To see a directory and file structure for a module, see the language specific contribution guide.
It is likely that to complete E2E tests, a number of resources will be required as dependencies to enable the tests to pass successfully. Some examples:
When testing the Diagnostic Settings interface for a Resource Module, you will need an existing Log Analytics Workspace to be able to send the logs to as a destination.
When testing the Private Endpoints interface for a Resource Module, you will need an existing Virtual Network, Subnet and Private DNS Zone to be able to complete the Private Endpoint deployment and configuration.
Module owners MUST:
Create the required resources that their module depends upon in the test file/directory
They MUST either use:
Simple/native resource declarations/definitions in their respective IaC language, OR
Another already published AVM Module that MUST be pinned to a specific published version.
They MUST NOT use any local directory path references or local copies of AVM modules in their own modules test directory.
➕ Terraform & Bicep Log Analytics Workspace examples using simple/native declarations for use in E2E tests
Deployment tests are an important part of a module’s validation and a staple of AVM’s CI environment. However, there are situations where certain e2e-test-deployments cannot be performed against AVM’s test environment (e.g., if a special configuration/registration (such as certain AI models) is required). For these cases, the CI offers the possibility to ‘skip’ specific test cases by placing a file named .e2eignore in their test folder.
Note
A skipped test case is still added to the ‘Usage Examples’ section of the module’s readme and should be manually validated at regular intervals.
Details for use in E2E tests
You MUST add a note to the test’s metadata description, which explains the exemption.
If you require that a test is skipped and add an “.e2eignore” file (e.g. <module>/tests/e2e/<testname>/.e2eignore) to a pull request, a member of the AVM Core Technical Bicep Team must approve said pull request. The content of the file is logged in the module’s workflow runs and transparently communicates why the test case is skipped during the deployment validation stage. It is hence important to specify the reason for skipping the deployment in this file.
Sample file content:
The test is skipped, as only one instance of this service can be deployed to a subscription.
Note
For resource modules, the ‘defaults’ and ‘waf-aligned’ tests can’t be skipped.
The deployment of a test can be skipped by adding a .e2eignore file into a test folder (e.g. /examples/<testname>).
Modules SHOULD implement unit testing to ensure logic and conditions within parameters/variables/locals are performing correctly. These tests MUST pass before a module version can be published.
Unit Tests test specific module functionality, without deploying resources. Used on more complex modules. In Terraform these live in tests/unit.
Modules MUST use static analysis, e.g., linting, security scanning (PSRule, tflint, etc.). These tests MUST pass before a module version can be published.
There may be differences between languages in linting rules standards, but the AVM core team will try to close these and bring them into alignment over time.
Modules MUST implement idempotency end-to-end (deployment) testing. E.g. deploying the module twice over the top of itself.
Modules SHOULD pass the idempotency test, as we are aware that there are some exceptions where they may fail as a false-positive or legitimate cases where a resource cannot be idempotent.
For example, Virtual Machine Image names must be unique on each resource creation/update.
Module owners MUST ensure that child and extension resources, and those Bicep or Terraform interface resources that are supported by their modules, are validated in E2E tests as per SNFR2 to ensure they deploy and are configured correctly.
These MAY be tested in a separate E2E test and DO NOT have to be tested in each E2E test.
README documentation MUST be automatically/programmatically generated. MUST include the sections as defined in the language specific requirements BCPNFR2, TFNFR2.
Where descriptions for variables and outputs span multiple lines, the description MAY provide input examples for each variable using the HEREDOC format and embedded markdown.
Example:
variable"my_complex_input" {
type = map(object({
param1 = stringparam2 = optional(number, null)
}))
description = <<DESCRIPTION A complex input variable that is a map of objects.
Each object has two attributes:
- `param1`: A required string parameter.
- `param2`: (Optional) An optional number parameter.
Example Input:
```terraform
my_complex_input = {
"object1" = {
param1 = "value1"
param2 = 2
}
"object2" = {
param1 = "value2"
}
}
```
DESCRIPTION
}
You cannot specify the patch version for Bicep modules in the public Bicep Registry, as this is automatically incremented by 1 each time a module is published. You can only set the Major and Minor versions.
Modules MUST use semantic versioning (aka semver) for their versions and releases in accordance with: Semantic Versioning 2.0.0
For example all modules should be released using a semantic version that matches this pattern: X.Y.Z
X == Major Version
Y == Minor Version
Z == Patch Version
Module versioning before first Major version release 1.0.0
Initially modules MUST be released as version 0.1.0 and incremented via Minor and Patch versions only until the AVM Core Team are confident the AVM specifications are mature enough and appropriate CI test coverage is in place, plus the module owner is happy the module has been “road tested” and is now stable enough for its first Major release of version 1.0.0.
Note
Releasing as version 0.1.0 initially and only incrementing Minor and Patch versions allows the module owner to make breaking changes more easily and frequently as it’s still not an official Major/Stable release. 👍
Until first Major version 1.0.0 is released, given a version number X.Y.Z:
X Major version MUST NOT be bumped.
Y Minor version MUST be bumped when introducing breaking changes (which would normally bump Major after 1.0.0 release) or feature updates (same as it will be after 1.0.0 release).
Z Patch version MUST be bumped when introducing non-breaking, backward compatible bug fixes (same as it will be after 1.0.0 release).
A module SHOULD avoid breaking changes, e.g., deprecating inputs vs. removing. If you need to implement changes that cause a breaking change, the major version should be increased.
Info
Modules that have not been released as 1.0.0 may introduce breaking changes, as explained in the previous ID SNFR17. That means that you have to introduce non-breaking and breaking changes with a minor version jump, as long as the module has not reached version 1.0.0.
There are, however, scenarios where you want to include breaking changes into a commit and not create a new major version. If you want to introduce breaking changes as part of a minor update, you can do so. In this case, it is essential to keep the change backward compatible, so that the existing code will continue to work. At a later point, another update can increase the major version and remove the code introduced for the backward compatibility.
Tip
See the language specific examples to find out how you can deal with deprecations in AVM modules.
ID: SNFR21 - Category: Publishing - Cross Language Collaboration
When the module owners of the same Resource or Pattern AVM module are not the same individual or team for all languages, each language’s team SHOULD collaborate with their sibling language team for the same module to ensure consistency where possible.
You cannot specify the patch version for Bicep modules in the public Bicep Registry, as this is automatically incremented by 1 each time a module is published. You can only set the Major and Minor versions.
Modules MUST use semantic versioning (aka semver) for their versions and releases in accordance with: Semantic Versioning 2.0.0
For example all modules should be released using a semantic version that matches this pattern: X.Y.Z
X == Major Version
Y == Minor Version
Z == Patch Version
Module versioning before first Major version release 1.0.0
Initially modules MUST be released as version 0.1.0 and incremented via Minor and Patch versions only until the AVM Core Team are confident the AVM specifications are mature enough and appropriate CI test coverage is in place, plus the module owner is happy the module has been “road tested” and is now stable enough for its first Major release of version 1.0.0.
Note
Releasing as version 0.1.0 initially and only incrementing Minor and Patch versions allows the module owner to make breaking changes more easily and frequently as it’s still not an official Major/Stable release. 👍
Until first Major version 1.0.0 is released, given a version number X.Y.Z:
X Major version MUST NOT be bumped.
Y Minor version MUST be bumped when introducing breaking changes (which would normally bump Major after 1.0.0 release) or feature updates (same as it will be after 1.0.0 release).
Z Patch version MUST be bumped when introducing non-breaking, backward compatible bug fixes (same as it will be after 1.0.0 release).
Module Classifications
Module Classification Definitions
AVM defines two module classifications, Resource Modules and Pattern Modules, that can be created, published, and consumed; these are defined further in the table below:
| Module Class | Definition | Who is it for? |
| --- | --- | --- |
| Resource Module | Deploys a primary resource with WAF high priority/impact best practice configurations set by default, e.g., availability zones, firewall, enforced Entra ID authentication and other shared interfaces, e.g., RBAC, Locks, Private Endpoints etc. (if supported). See What does AVM mean by “WAF Aligned”?<br>They MAY include related resources, e.g. VM contains disk & NIC. Focus should be on customer experience. A customer would expect that a VM module would include all required resources to provision a VM.<br>Furthermore, Resource Modules MUST NOT deploy external dependencies for the primary resource. E.g. a VM needs a vNet and Subnet to be deployed into, but the vNet will not be created by the VM Resource Module.<br>Finally, a resource can be anything such as Microsoft Defender for Cloud Pricing Plans; these are still resources in ARM and can therefore be created as a Resource Module. | People who want to craft bespoke architectures that default to WAF best practices, where appropriate, for each resource.<br>People who want to create pattern modules. |
| Pattern Module | Deploys multiple resources, usually using Resource Modules. They can be any size but should help accelerate a common task/deployment/architecture.<br>Good candidates for pattern modules are those architectures that exist in Azure Architecture Center, or other official documentation.<br>Note: Pattern modules can contain other pattern modules, however, pattern modules MUST NOT contain references to non-AVM modules. | People who want to easily deploy patterns (architectures) using WAF best practices. |
| Utility Module (draft, see below) | Implements a function or routine that can be flexibly reused in resource or pattern modules - e.g., a function that retrieves the endpoint of an API or portal of a given environment.<br>It MUST NOT deploy any Azure resources other than deployment scripts. | People who want to leverage commonly used functions/routines/helpers in their module, instead of re-implementing them locally. |
PREVIEW
The concept of Utility Modules will be introduced gradually, through some initial examples. The definition above is subject to change as additional details are worked out.
The required automated tests and other workflow elements will be derived from the Pattern Modules’ automation/CI environment as the concept matures.
Utility modules will follow the below naming convention:
Bicep: avm/utl/<hyphenated grouping/category name>/<hyphenated utility module name>. Modules will be kept under the avm/utl folder in the BRM repository.
Terraform: avm-utl-<utility-module-name>. Repositories will be named after the utility module (e.g., terraform-azurerm-avm-utl-<my utility module>).
All related documentation (functional and non-functional requirements, etc.) will also be published along the way.
Module Lifecycle
This section outlines the different stages of a module’s lifecycle:
flowchart LR
Proposed["1 - Proposed ⚪"] --> |Acceptance criteria met ✅| Available["2 - Available 🟢"]
click Proposed "/azure-verified-modules-copy/specs/shared/module-lifecycle/#1-proposed-modules"
click Available "/azure-verified-modules-copy/specs/shared/module-lifecycle/#2-available-modules"
Proposed --> |Acceptance criteria not met ❌| Rejected[Rejected]
Available --> |Module temporarily not maintained| Orphaned["3 - Orphaned 🟡"]
Orphaned --> |End of life| Deprecated["4 - Deprecated 🔴"]
click Orphaned "/azure-verified-modules-copy/specs/shared/module-lifecycle/#3-orphaned-modules"
Orphaned --> |New owner identified| Available
Available --> |End of life| Deprecated
click Deprecated "/azure-verified-modules-copy/specs/shared/module-lifecycle/#4-deprecated-modules"
style Proposed fill:#ADD8E6,stroke:#333,stroke-width:1px
style Orphaned fill:#F4A460,stroke:#333,stroke-width:1px
style Available fill:#8DE971,stroke:#333,stroke-width:4px
style Deprecated fill:#000000,stroke:#333,stroke-width:1px,color:#fff
style Rejected fill:#A2A2A2,stroke:#333,stroke-width:1px
Important
If a module proposal is rejected, the issue is closed and the module’s lifecycle ends.
1. Proposed Modules
A module can be proposed through the module proposal process. The module proposal process is outlined in the Process Overview section.
To propose/request a new AVM resource, pattern or utility module, submit a module proposal issue in the AVM repository.
The proposal should include the following information:
module name
language (Bicep, Terraform, etc.)
module class (resource, pattern, utility)
module description
module owner(s) - if known
The AVM core team will review the proposal, and administrate the module.
Info
To propose a new module, submit a module proposal issue in the AVM repository.
2. Available modules
Once a module has been fully developed, tested and published in the main branch of the repository and the corresponding public registry (Bicep or Terraform), it is then considered to be “available” and can be used by the community. The module is maintained by the module owner(s). Feature or bug fix requests and related pull requests can be submitted by anyone to the module owner(s) for review.
3. Orphaned Modules
It is critical to the consumers’ experience that modules continue to be maintained. In the case where a module owner cannot continue in their role or does not respond to issues as per the defined timescale in the Module Support page, the following process will apply:
The module owner is responsible for finding a replacement owner and providing a handover.
If no replacement can be found or the module owner leaves Microsoft without giving warning to the AVM core team, the AVM core team will provide essential maintenance (critical bug and security fixes), as per the Module Support page
The AVM core team will continue to try and re-assign the module ownership.
While a module is in an orphaned state, only security and bug fixes MUST be made; no new feature development will be worked on until a new owner is found who can then lead this effort for the module.
An issue will be created on the central AVM repo (zojovano/azure-verified-modules-copy) to track the finding of a new owner for a module.
When a module becomes orphaned, the AVM core team will communicate this through an information notice to be placed as follows.
In case of a Bicep module, the information notice will be placed in an ORPHANED.md file and in the header of the module’s README.md - both residing in the module’s root.
In case of a Terraform module, the information notice will be placed in the header of the README.md file, in the module’s root.
The information notice will include the following statement:
⚠️THIS MODULE IS CURRENTLY ORPHANED.⚠️
- Only security and bug fixes are being handled by the AVM core team at present.
- If interested in becoming the module owner of this orphaned module (must be Microsoft FTE), please look for the related "orphaned module" GitHub issue [here](https://aka.ms/AVM/OrphanedModules)!
Also, the AVM core team will amend the issue automation to auto reply stating that the repo is orphaned and only security/bug fixes are being handled until a new module owner is found.
4. Deprecated Modules
Once a module reaches the end of its lifecycle (e.g., it’s permanently replaced by another module; permanent retirement due to obsolete technology/solution), it needs to be deprecated. A deprecated module will no longer be maintained, and no new features or bug fixes will be implemented for it. The module will indefinitely stay available in the public registry and source code repository for use, but certain measures will take place, such as:
The module will show as deprecated in the AVM module index.
The module will no longer be shown through VS Code IntelliSense.
The module’s source code will be kept in its repository but it will show a deprecated status through a DEPRECATED.md file (Bicep only) and a disclaimer in the module’s README.md file.
It will be clearly indicated on the module’s repo that new issues can no longer be submitted for the module:
Bicep: The module will be taken off the list of available modules in related issue templates.
Terraform: The module’s repo will be archived.
It is recommended to migrate to a replacement/alternative version of the module, if available.
Important
When a module becomes deprecated, the AVM core team will communicate this through an information notice to be placed as follows.
In case of a Bicep module, the information notice will be placed in a DEPRECATED.md file and in the header of the module’s README.md - both residing in the module’s root.
In case of a Terraform module, the information notice will be placed in the header of the README.md file, in the module’s root.
The information notice MUST include the following statement:
⚠️THIS MODULE IS DEPRECATED.⚠️
- It will no longer receive any updates.
- The module can still be used as is (references to any existing versions will keep working), but it is not recommended for new deployments.
- It is recommended to migrate to a replacement/alternative version of the module, if available.
➕ Retrieve the available versions of a deprecated module
To find all previous versions of a Bicep module, the following steps need to be performed (assuming the avm/ptn/finops-toolkit/finops-hub module has been deprecated):
To find out all the versions the module has ever been published under, perform one of these steps:
navigate to Bicep Public Registry’s JSON index and look for the module’s name,
OR clone the Bicep Public Registry repository and run the following command in the root of the repository: git tag -l 'avm/ptn/finops-toolkit/finops-hub/*'. This will list all the tags that match the module’s name.
Identify the available versions of the module, e.g., 0.1.0, 0.1.1, etc.
Example: avm/ptn/compute/app-tier-vmss or avm/ptn/avd-lza/management-plane or avm/ptn/3-tier/web-app
Segments:
ptn defines this as a pattern module
<hyphenated grouping/category name> is a hierarchical grouping of pattern modules by category, with each word separated by dashes, such as:
project name, e.g., avd-lza,
primary resource provider, e.g., compute or network, or
architecture, e.g., 3-tier
<hyphenated pattern module name> is a term describing the module’s function, with each word separated by dashes, e.g., app-tier-vmss = Application Tier VMSS; management-plane = Azure Virtual Desktop Landing Zone Accelerator Management Plane
Terraform Pattern Module Naming
Naming convention:
avm-ptn-<pattern module name> (Module name for registry)
terraform-<provider>-avm-ptn-<pattern module name> (GitHub repository name to meet registry naming requirements)
Example: avm-ptn-apptiervmss or avm-ptn-avd-lza-managementplane
Segments:
<provider> is the logical abstraction of various APIs used by Terraform. In most cases, this is going to be azurerm or azuread for pattern modules.
ptn defines this as a pattern module
<pattern module name> is a term describing the module’s function, e.g., apptiervmss = Application Tier VMSS; avd-lza-managementplane = Azure Virtual Desktop Landing Zone Accelerator Management Plane
PMNFR2 - Use Resource Modules to Build a Pattern Module
ID: PMNFR2 - Category: Composition - Use Resource Modules to Build a Pattern Module
A Pattern Module SHOULD be built from AVM Resource Modules to establish a standardized code base and improve maintainability. If a valid reason exists, a pattern module MAY contain native resources (“vanilla” code) where it’s necessary. A Pattern Module MUST NOT contain references to non-AVM modules.
Valid reasons for not using a Resource Module for a resource required by a Pattern Module include but are not limited to:
When using a Resource Module would result in hitting scaling limitations and/or would reduce the capabilities of the Pattern Module due to the limitations of Azure Resource Manager.
Developing a Pattern Module under time constraint, without having all required Resource Modules readily available.
Note
In the latter case, the Pattern Module SHOULD be updated to use the Resource Module when the required Resource Module becomes available, to avoid accumulating technical debt. Ideally, all required Resource Modules SHOULD be developed first, and then leveraged by the Pattern Module.
Resource modules support the following optional features/extension resources, as specified, if supported by the primary resource. The top-level variable/parameter names MUST be:
| Optional Features/Extension Resources | Bicep Parameter Name | Terraform Variable Name | MUST/SHOULD |
| --- | --- | --- | --- |
| Diagnostic Settings | diagnosticSettings | diagnostic_settings | MUST |
| Role Assignments | roleAssignments | role_assignments | MUST |
| Resource Locks | lock | lock | MUST |
| Tags | tags | tags | MUST |
| Managed Identities (System / User Assigned) | managedIdentities | managed_identities | MUST |
| Private Endpoints | privateEndpoints | private_endpoints | MUST |
| Customer Managed Keys | customerManagedKey | customer_managed_key | MUST |
| Azure Monitor Alerts | alerts | alerts | SHOULD |
Resource modules MUST NOT deploy required/dependent resources for the optional features/extension resources specified above. For example, for Diagnostic Settings the resource module MUST NOT deploy the Log Analytics Workspace, this is expected to be already in existence from the perspective of the resource module deployed via another method/module etc.
Note
Please note that the implementation of Customer Managed Keys from an ARM API perspective is different across various RPs that implement Customer Managed Keys in their service. For that reason you may see differences between modules on how Customer Managed Keys are handled and implemented, but functionality will be as expected.
Module owners MAY choose to utilize cross repo dependencies for these “add-on” resources, or MAY choose to implement the code directly in their own repo/module. So long as the implementation and outputs are as per the specification’s requirements, then this is acceptable.
Tip
Make sure to check out the language-specific specifications for more info on this:
Resource modules MUST implement a common interface, e.g. the input’s data structures and properties within them (objects/arrays/dictionaries/maps), for the optional features/extension resources:
Parameters/variables that pertain to the primary resource MUST NOT use the resource type in the name.
e.g., use sku, vs. virtualMachineSku/virtualmachine_sku
Another example: where an RP contains some of its name within a property, leave the property unchanged. E.g., Key Vault has a property called keySize; it is fine to leave this as-is and not remove the key part from the property/parameter name.
When a given version of an Azure resource used in a resource module reaches its end-of-life (EOL) and is no longer supported by Microsoft, the module owner SHOULD ensure that:
The module is aligned with these changes and only includes supported versions of the resource. This is typically achieved through the allowed values in the parameter that specifies the resource SKU or type.
The following notice is shown under the Notes section of the module’s readme.md. (If any related public announcement is available, it can also be linked to from the Notes section.):
“Certain versions of this Azure resource reached their end of life. The latest version of this module only includes supported versions of the resource. All unsupported versions have been removed from the related parameters.”
AND the related parameter’s description:
“Certain versions of this Azure resource reached their end of life. The latest version of this module only includes supported versions of the resource. All unsupported versions have been removed from this parameter.”
We will maintain a set of CSV files in the AVM Central Repo (Azure/Azure-Verified-Modules) with the correct singular names for all resource types to enable checks to utilize this list to ensure repos are named correctly. To see the formatted content of these CSV files with additional information, please visit the AVM Module Indexes page.
This will be updated quarterly, or ad-hoc as new RPs/resources are created and highlighted via a check failure.
Resource modules MUST follow the below naming conventions (all lower case):
Bicep Resource Module Naming
Naming convention (module name for registry): avm/res/<hyphenated resource provider name>/<hyphenated ARM resource type>
Example: avm/res/compute/virtual-machine or avm/res/managed-identity/user-assigned-identity
Segments:
res defines this is a resource module
<hyphenated resource provider name> is the resource provider’s name after the Microsoft part, with each word starting with a capital letter separated by dashes, e.g., Microsoft.Compute = compute, Microsoft.ManagedIdentity = managed-identity.
<hyphenated ARM resource type> is the singular version of the word after the resource provider, with each word starting with a capital letter separated by dashes, e.g., Microsoft.Compute/virtualMachines = virtual-machine, BUT Microsoft.Network/trafficmanagerprofiles = trafficmanagerprofile - since trafficmanagerprofiles is all lower case as per the ARM API definition.
Bicep Child Module Naming
Naming convention (module name for registry): avm/res/<hyphenated resource provider name>/<hyphenated ARM resource type>/<hyphenated child resource type>/<hyphenated grandchild resource type>/<etc.>
Example: avm/res/network/virtual-network/subnet or avm/res/storage/storage-account/blob-service/container
Segments:
res defines this is a resource module
<hyphenated resource provider name> is the resource provider’s name after the Microsoft part, with each word starting with a capital letter separated by dashes, e.g., Microsoft.Network = network.
<hyphenated ARM resource type> is the singular version of the word after the resource provider, with each word starting with a capital letter separated by dashes, e.g., Microsoft.Network/virtualNetworks = virtual-network.
<hyphenated child resource type (to be repeated for grandchildren, etc.)> is the singular version of the word after the resource provider, with each word starting with a capital letter separated by dashes, e.g., Microsoft.Network/virtualNetworks/subnets = subnet or Microsoft.Storage/storageAccounts/blobServices/containers = blob-service/container.
Terraform Resource Module Naming
Naming convention:
avm-res-<resource provider>-<ARM resource type> (module name for registry)
terraform-<provider>-avm-res-<resource provider>-<ARM resource type> (GitHub repository name to meet registry naming requirements)
Example: avm-res-compute-virtualmachine or avm-res-managedidentity-userassignedidentity
Segments:
<provider> is the logical abstraction of various APIs used by Terraform. In most cases, this is going to be azurerm or azuread for resource modules.
res defines this is a resource module
<resource provider> is the resource provider’s name after the Microsoft part, e.g., Microsoft.Compute = compute.
<ARM resource type> is the singular version of the word after the resource provider, e.g., Microsoft.Compute/virtualMachines = virtualmachine
A resource module MUST use the following standard inputs:
name (no default)
location (if supported by the resource and not a global resource, then use Resource Group location, if resource supports Resource Groups, otherwise no default)
ID: RMNFR3 - Category: Composition - RP Collaboration
Module owners (Microsoft FTEs) SHOULD reach out to the respective Resource Provider teams to build a partnership and collaboration on the modules creation, existence and long term maintenance.
Modules MAY create/adopt public preview services and features at their discretion.
Preview API versions MAY be used when:
The resource/service/feature is GA but the only API version available for the GA resource/service/feature is a preview version
For example, for Diagnostic Settings (Microsoft.Insights/diagnosticSettings), the latest API version that exposes GA features, such as Category Groups, is 2021-05-01-preview.
Otherwise, the latest “non-preview” version of the API SHOULD be used.
Preview services and features SHOULD NOT be promoted and exposed, unless they are supported by the respective PG and that support is documented publicly.
However, they MAY be exposed at the module owner’s discretion, but the following rules MUST be followed:
The description of each of the parameters/variables used for the preview service/feature MUST start with:
“THIS IS A <PARAMETER/VARIABLE> USED FOR A PREVIEW SERVICE/FEATURE, MICROSOFT MAY NOT PROVIDE SUPPORT FOR THIS, PLEASE CHECK THE PRODUCT DOCS FOR CLARIFICATION”
Modules SHOULD set defaults in input parameters/variables to align to high priority/impact/severity recommendations, where appropriate and applicable, in the following frameworks and resources:
They SHOULD NOT align to these recommendations when it requires an external dependency/resource to be deployed and configured and then associated to the resources in the module.
Alignment SHOULD prioritize best-practices and security over cost optimization, but MUST allow for these to be overridden by a module consumer easily, if desired.
We will maintain a set of CSV files in the AVM Central Repo (Azure/Azure-Verified-Modules) with the required TelemetryId prefixes to enable checks to utilize this list to ensure the correct IDs are used. To see the formatted content of these CSV files with additional information, please visit the AVM Module Indexes page.
These will also be provided as a comment on the module proposal, once accepted, from the AVM core team.
Modules MUST provide the capability to collect deployment/usage telemetry, as detailed further in Telemetry.
To highlight that AVM modules use telemetry, an information notice MUST be included in the footer of each module’s README.md file with the below content. (See more details on this requirement, here.)
Telemetry Information Notice
Note
The following information notice is automatically added at the bottom of the README.md file of the module when:
Terraform: Executing the make docs command with the note and header ## Data Collection being placed in the module’s _footer.md beforehand
### Data Collection
The software may collect information about you and your use of the software and send it to Microsoft. Microsoft may use this information to provide services and improve our products and services. You may turn off the telemetry as described in the [repository](https://aka.ms/avm/telemetry). There are also some features in the software that may enable you and Microsoft to collect data from users of your applications. If you use these features, you must comply with applicable law, including providing appropriate notices to users of your applications together with a copy of Microsoft’s privacy statement. Our privacy statement is located at <https://go.microsoft.com/fwlink/?LinkID=824704>. You can learn more about data collection and use in the help documentation and our privacy statement. Your use of the software operates as your consent to these practices.
The ARM deployment name used for the telemetry MUST follow the pattern and MUST be no longer than 64 characters in length: 46d3xbcp.<res/ptn>.<(short) module name>.<version>.<uniqueness>
<res/ptn> == AVM Resource or Pattern Module
<(short) module name> == The AVM Module’s, possibly shortened, name including the resource provider and the resource type, without:
The prefixes: avm-res-
The prefixes: avm-ptn-
<version> == The AVM Module’s MAJOR.MINOR version (only) with . (periods) replaced with - (hyphens), to allow simpler splitting of the ARM deployment name
<uniqueness> == This section of the ARM deployment name is to be used to ensure uniqueness of the deployment name.
This is to cater for the following scenarios:
The module is deployed multiple times to the same:
Due to the 64-character length limit of Azure deployment names, the <(short) module name> segment has a length limit of 36 characters, so if the module name is longer than that, it MUST be truncated to 36 characters. If any of the semantic version’s segments are longer than 1 character, it further restricts the number of characters that can be used for naming the module.
An example deployment name for the AVM Virtual Machine Resource Module would be: 46d3xbcp.res.compute-virtualmachine.1-2-3.eum3
An example deployment name for a shortened module name would be: 46d3xbcp.res.desktopvirtualization-appgroup.1-2-3.eum3
Tip
Terraform: Terraform uses a telemetry provider, the configuration of which is the same for every module and is included in the template repo.
General: See the language specific contribution guides for detailed guidance and sample code to use in AVM modules to achieve this requirement.
To enable telemetry data collection for Terraform modules, the modtm telemetry provider MUST be used. This lightweight telemetry provider sends telemetry data to Azure Application Insights via an HTTP POST front-end service.
The modtm telemetry provider is included in all Terraform modules and is enabled by default through the main.telemetry.tf file being automatically distributed from the template repo.
The modtm provider MUST be listed under the required_providers section in the module’s terraform.tf file using the following entry. This is also validated by the linter.
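The entry takes roughly the following shape; the version constraint shown is illustrative, so use the one distributed from the template repo:

```terraform
terraform {
  required_providers {
    modtm = {
      source  = "azure/modtm"
      version = "~> 0.3"
    }
  }
}
```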
The telemetry enablement MUST be on/enabled by default, however this MUST be able to be disabled by a module consumer by setting the below parameter/variable value to false:
Bicep: enableTelemetry
Terraform: enable_telemetry
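For Terraform, the corresponding variable typically looks like the sketch below (the description text is illustrative; the template repo provides the canonical wording):

```terraform
variable "enable_telemetry" {
  type        = bool
  default     = true # telemetry is on by default
  nullable    = false
  description = "Controls whether telemetry collection is enabled for this module. Set to false to opt out."
}
```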
Note
Whenever a module references AVM modules that implement the telemetry parameter (e.g., a pattern module that uses AVM resource modules), the telemetry parameter value MUST be passed through to these modules. This is necessary to ensure a consumer can reliably enable & disable the telemetry feature for all used modules.
This general specification can be modified for some use-cases, that are language specific:
Bicep
For cross-references in resource modules, the spec BCPFR7 also applies.
ID: SFR5 - Category: Composition - Availability Zones
Modules that deploy zone-redundant resources MUST enable the spanning across as many zones as possible by default, typically all 3.
Modules that deploy zonal resources MUST provide the ability to specify a zone for the resources to be deployed/pinned to. However, they MUST NOT default to a particular zone, e.g. 1, in an effort to make the consumer aware of the zone they are selecting to suit their architecture requirements.
For both scenarios the modules MUST expose these configuration options via configurable parameters/variables.
ID: SFR6 - Category: Composition - Data Redundancy
Modules that deploy resources or patterns that support data redundancy SHOULD enable this to the highest possible value by default, e.g. RA-GZRS. When a resource or pattern doesn’t provide the ability to specify data redundancy as a simple property, e.g. GRS etc., then the modules MUST provide the ability to enable data redundancy for the resources or pattern via parameters/variables.
For example, a Storage Account module can simply set the sku.name property to Standard_RAGZRS. Whereas a SQL DB or Cosmos DB module will need to expose more properties, via parameters/variables, to allow the specification of the regions to replicate data to as per the consumer’s requirements.
Only the latest released version of a module MUST be supported.
For example, if an AVM Resource Module is used in an AVM Pattern Module that was working but now is not, the first step by the AVM Pattern Module owner should be to upgrade to the latest version of the AVM Resource Module, test, and then, if not fixed, troubleshoot and fix forward from that latest version of the AVM Resource Module onwards.
This prevents AVM module owners from having to maintain multiple major release versions.
README documentation MUST be automatically/programmatically generated. It MUST include the sections as defined in the language specific requirements BCPNFR2, TFNFR2.
You cannot specify the patch version for Bicep modules in the public Bicep Registry, as this is automatically incremented by 1 each time a module is published. You can only set the Major and Minor versions.
Modules MUST use semantic versioning (aka semver) for their versions and releases in accordance with: Semantic Versioning 2.0.0
For example all modules should be released using a semantic version that matches this pattern: X.Y.Z
X == Major Version
Y == Minor Version
Z == Patch Version
Module versioning before first Major version release 1.0.0
Initially, modules MUST be released as version 0.1.0 and incremented via Minor and Patch versions only, until the AVM Core Team is confident the AVM specifications are mature enough and appropriate CI test coverage is in place, and the module owner is happy the module has been “road tested” and is now stable enough for its first Major release of version 1.0.0.
Note
Releasing as version 0.1.0 initially and only incrementing Minor and Patch versions allows the module owner to make breaking changes more easily and frequently as it’s still not an official Major/Stable release. 👍
Until first Major version 1.0.0 is released, given a version number X.Y.Z:
X Major version MUST NOT be bumped.
Y Minor version MUST be bumped when introducing breaking changes (which would normally bump Major after 1.0.0 release) or feature updates (same as it will be after 1.0.0 release).
Z Patch version MUST be bumped when introducing non-breaking, backward compatible bug fixes (same as it will be after 1.0.0 release).
A module SHOULD avoid breaking changes, e.g., deprecating inputs vs. removing. If you need to implement changes that cause a breaking change, the major version should be increased.
Info
Modules that have not been released as 1.0.0 may introduce breaking changes, as explained in the previous ID SNFR17. That means that you have to introduce non-breaking and breaking changes with a minor version jump, as long as the module has not reached version 1.0.0.
There are, however, scenarios where you want to include breaking changes into a commit and not create a new major version. If you want to introduce breaking changes as part of a minor update, you can do so. In this case, it is essential to keep the change backward compatible, so that the existing code will continue to work. At a later point, another update can increase the major version and remove the code introduced for the backward compatibility.
Tip
See the language specific examples to find out how you can deal with deprecations in AVM modules.
Modules MUST implement end-to-end (deployment) testing that creates actual resources to validate that module deployments work. In Bicep, tests are sourced from the directories in /tests/e2e. In Terraform, these are in /examples.
Each test MUST run and complete successfully without user input, for automation purposes.
Each test MUST also destroy/clean-up its resources and test dependencies following a run.
Tip
To see a directory and file structure for a module, see the language specific contribution guide.
It is likely that to complete E2E tests, a number of resources will be required as dependencies to enable the tests to pass successfully. Some examples:
When testing the Diagnostic Settings interface for a Resource Module, you will need an existing Log Analytics Workspace to be able to send the logs to as a destination.
When testing the Private Endpoints interface for a Resource Module, you will need an existing Virtual Network, Subnet and Private DNS Zone to be able to complete the Private Endpoint deployment and configuration.
Module owners MUST:
Create the required resources that their module depends upon in the test file/directory
They MUST either use:
Simple/native resource declarations/definitions in their respective IaC language, OR
Another already published AVM Module that MUST be pinned to a specific published version.
They MUST NOT use any local directory path references or local copies of AVM modules in their own modules test directory.
➕ Terraform & Bicep Log Analytics Workspace examples using simple/native declarations for use in E2E tests
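For example, a simple Terraform test dependency for a Log Analytics Workspace could be declared natively like this (resource names, location and SKU are illustrative placeholders):

```terraform
resource "azurerm_resource_group" "this" {
  name     = "rg-avm-test-dependencies"
  location = "westeurope"
}

# Test dependency: destination workspace for, e.g., the Diagnostic Settings interface test.
resource "azurerm_log_analytics_workspace" "this" {
  name                = "law-avm-test"
  location            = azurerm_resource_group.this.location
  resource_group_name = azurerm_resource_group.this.name
  sku                 = "PerGB2018"
  retention_in_days   = 30
}
```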
Deployment tests are an important part of a module’s validation and a staple of AVM’s CI environment. However, there are situations where certain e2e-test-deployments cannot be performed against AVM’s test environment (e.g., if a special configuration/registration (such as certain AI models) is required). For these cases, the CI offers the possibility to ‘skip’ specific test cases by placing a file named .e2eignore in their test folder.
Note
A skipped test case is still added to the ‘Usage Examples’ section of the module’s readme and should be manually validated at regular intervals.
Details for use in E2E tests
You MUST add a note to the test’s metadata description, which explains the exemption.
If you require that a test is skipped and add an “.e2eignore” file (e.g. <module>/tests/e2e/<testname>/.e2eignore) to a pull request, a member of the AVM Core Technical Bicep Team must approve said pull request. The content of the file is logged in the module’s workflow runs and transparently communicates why the test case is skipped during the deployment validation stage. It is hence important to specify the reason for skipping the deployment in this file.
Sample file content:
The test is skipped, as only one instance of this service can be deployed to a subscription.
Note
For resource modules, the ‘defaults’ and ‘waf-aligned’ tests can’t be skipped.
The deployment of a test can be skipped by adding a .e2eignore file into a test folder (e.g. /examples/<testname>).
ID: SNFR20 - Category: Contribution/Support - GitHub Teams Only
All GitHub repositories that AVM modules are published from and hosted within MUST assign GitHub repository permissions to GitHub teams only.
Each module MUST have separate GitHub teams assigned for module owners AND module contributors respectively. These GitHub teams MUST be created in the Azure organization in GitHub.
There MUST NOT be any GitHub repository permissions assigned to individual users.
Note
The names for the GitHub teams for each approved module are already defined in the respective Module Indexes. These teams MUST be created (and used) for each module.
The @Azure prefix in the last column of the tables linked above represents the “Azure” GitHub organization all AVM-related repositories exist in. DO NOT include this segment in the team’s name!
Important
Non-FTE / external contributors (subject matter experts that aren’t Microsoft employees) can’t be members of the teams described in this chapter; hence, they won’t gain any extra permissions on AVM repositories and therefore need to work in forks.
Naming Convention
The naming convention for the GitHub teams MUST follow the below pattern:
<hyphenated module name>-module-owners-<bicep/tf> - to be assigned as the GitHub repository’s Module Owners team
<hyphenated module name>-module-contributors-<bicep/tf> - to be assigned as the GitHub repository’s Module Contributors team
Note
The naming convention for Terraform modules is slightly different from the naming convention of their respective GitHub teams.
Segments:
<hyphenated module name> == the AVM Module’s name, with each segment separated by dashes, i.e., avm-res-<resource provider>-<ARM resource type>
All officially documented module owner(s) MUST be added to the -module-owners- team. The -module-owners- team MUST NOT have any other members.
Any additional module contributors whom the module owner(s) agreed to work with MUST be added to the -module-contributors- team.
Unless explicitly requested and agreed, members of the AVM core team or any PG teams MUST NOT be added to the -module-owners- or -module-contributors- teams as permissions for them are granted through the teams described in SNFR9.
Grant Permissions - Bicep
Team memberships
Note
In case of Bicep modules, permissions to the BRM repository (the repo of the Bicep Registry) are granted via assigning the -module-owners- and -module-contributors- teams to parent teams that already have the required level of access configured. While it is the module owner’s responsibility to initiate the addition of their teams to the respective parents, only the AVM core team can approve this parent-child relationship.
Module owners MUST create their -module-owners- and -module-contributors- teams and as part of the provisioning process, they MUST request the addition of these teams to their respective parent teams (see the table below for details).
| GitHub Team Name | Description | Permissions | Permissions granted through | Where to work? |
| --- | --- | --- | --- | --- |
| <hyphenated module name>-module-owners-bicep | AVM Bicep Module Owners - <module name> | Write | Assignment to the avm-technical-reviewers-bicep parent team. | |
Examples - GitHub teams required for the Bicep resource module of Azure Virtual Network (avm/res/network/virtual-network):
avm-res-network-virtualnetwork-module-owners-bicep –> assign to the avm-technical-reviewers-bicep parent team.
avm-res-network-virtualnetwork-module-contributors-bicep –> assign to the avm-module-contributors-bicep parent team.
Tip
Direct link to create a new GitHub team and assign it to its parent: Create new team
Fill in the values as follows:
Team name: Following the naming convention described above, use the value defined in the module indexes.
Description: Follow the guidance above (see the Description column in the table above).
Parent team: Follow the guidance above (see the Permissions granted through column in the table above).
Team visibility: Visible
Team notifications: Enabled
CODEOWNERS file
As part of the “initial Pull Request” (that publishes the first version of the module), module owners MUST add an entry to the CODEOWNERS file in the BRM repository (here).
Note
Through this approach, the AVM core team will grant review permission to module owners as part of the standard PR review process.
Every CODEOWNERS entry (line) MUST include the following segments separated by a single whitespace character:
Path of the module, relative to the repo’s root, e.g.: /avm/res/network/virtual-network/
The -module-owners- team, with the @Azure/ prefix, e.g., @Azure/avm-res-network-virtualnetwork-module-owners-bicep
The GitHub team of the AVM Bicep reviewers, with the @Azure/ prefix, i.e., @Azure/avm-module-reviewers-bicep
Example - CODEOWNERS entry for the Bicep resource module of Azure Virtual Network (avm/res/network/virtual-network):
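Composed from the segments listed above, the entry would look like this (reconstructed here for illustration):

```
/avm/res/network/virtual-network/ @Azure/avm-res-network-virtualnetwork-module-owners-bicep @Azure/avm-module-reviewers-bicep
```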
Grant Permissions - Terraform
Module owners MUST assign the -module-owners- and -module-contributors- teams the necessary permissions on their Terraform module repository per the guidance below.
| GitHub Team Name | Description | Permissions | Permissions granted through | Where to work? |
| --- | --- | --- | --- | --- |
| <module name>-module-owners-tf | AVM Terraform Module Owners - <module name> | Admin | Direct assignment to repo | Module owner can decide whether they want to work in a branch local to the repo or in a fork. |
ID: SNFR21 - Category: Publishing - Cross Language Collaboration
When the module owners of the same Resource or Pattern AVM module are not the same individual or team for all languages, each language’s team SHOULD collaborate with their sibling language team for the same module to ensure consistency where possible.
ID: SNFR22 - Category: Inputs - Parameters/Variables for Resource IDs
A module parameter/variable that requires a full Azure Resource ID as an input value, e.g. /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.KeyVault/vaults/{keyVaultName}, MUST contain ResourceId/resource_id in its parameter/variable name to assist users in knowing what value to provide at a glance of the parameter/variable name.
For example, for the property workspaceId of the Diagnostic Settings resource: in Bicep its parameter name should be workspaceResourceId, and in Terraform the variable name should be workspace_resource_id.
workspaceId is not descriptive enough and is ambiguous as to which ID is required to be input.
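A minimal Terraform sketch of such a variable (the description wording is illustrative):

```terraform
variable "workspace_resource_id" {
  type        = string
  description = "The full Azure resource ID of the Log Analytics Workspace to send diagnostic data to, e.g. /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}."
}
```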
```shell
# Linux / MacOS
# For Windows replace $PWD with the local path of your repository
#
docker run -it -v $PWD:/repo -w /repo mcr.microsoft.com/powershell pwsh -Command '
  #Invoke-WebRequest -Uri "https://zojovano.github.io/azure-verified-modules-copy/scripts/Set-AvmGitHubLabels.ps1" -OutFile "Set-AvmGitHubLabels.ps1"
  $gh_version = "2.44.1"
  Invoke-WebRequest -Uri "https://github.com/cli/cli/releases/download/v2.44.1/gh_2.44.1_linux_amd64.tar.gz" -OutFile "gh_$($gh_version)_linux_amd64.tar.gz"
  apt-get update && apt-get install -y git
  tar -xzf "gh_$($gh_version)_linux_amd64.tar.gz"
  ls -lsa
  mv "gh_$($gh_version)_linux_amd64/bin/gh" /usr/local/bin/
  rm "gh_$($gh_version)_linux_amd64.tar.gz" && rm -rf "gh_$($gh_version)_linux_amd64"
  gh --version
  ls -lsa
  gh auth login
  $OrgProject = "Azure/terraform-azurerm-avm-res-kusto-cluster"
  gh auth status
  ./Set-AvmGitHubLabels.ps1 -RepositoryName $OrgProject -CreateCsvLabelExports $false -NoUserPrompts $true
'
```
By default this script will only update and append labels on the repository specified. However, this can be changed by setting the parameter -UpdateAndAddLabelsOnly to $false, which will remove all the labels from the repository first and then apply the AVM labels from the CSV only.
Make sure you elevate your privilege to admin level or the labels will not be applied to your repository. Go to repos.opensource.microsoft.com/orgs/Azure/repos/ to request admin access before running the script.
Full Script:
The Set-AvmGitHubLabels.ps1 script can be downloaded from here.
```powershell
[Diagnostics.CodeAnalysis.SuppressMessageAttribute("PSAvoidUsingWriteHost", "", Justification = "Coloured output required in this script")]

<#
.SYNOPSIS
  This script can be used to create the Azure Verified Modules (AVM) standard GitHub labels in a GitHub repository.

.DESCRIPTION
  This script can be used to create the Azure Verified Modules (AVM) standard GitHub labels in a GitHub repository.

  By default, the script will remove all pre-existing labels and apply the AVM labels. However, this can be changed by using the -RemoveExistingLabels parameter and setting it to $false. The tool will also output the labels that exist in the repository before and after the script has run to a CSV file in the current directory, or a directory specified by the -OutputDirectory parameter.

  The AVM labels to be created are documented here: TBC

.NOTES
  Please ensure you have specified the GitHub repository correctly. The script will prompt you to confirm the repository name before proceeding.

.COMPONENT
  You must have the GitHub CLI installed and be authenticated to a GitHub account with access to the repository you are applying the labels to before running this script.

.LINK
  TBC

.Parameter RepositoryName
  The name of the GitHub repository to apply the labels to.

.Parameter RemoveExistingLabels
  If set to $true, the default value, the script will remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels. If set to $false, the script will not remove any pre-existing labels.

.Parameter UpdateAndAddLabelsOnly
  If set to $true, the default value, the script will only update and add labels to the repository specified in -RepositoryName. If set to $false, the script will remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels.

.Parameter OutputDirectory
  The directory to output the pre-existing and post-existing labels to in a CSV file. The default value is the current directory.

.Parameter CreateCsvLabelExports
  If set to $true, the default value, the script will output the pre-existing and post-existing labels to a CSV file in the current directory, or a directory specified by the -OutputDirectory parameter. If set to $false, the script will not output the pre-existing and post-existing labels to a CSV file.

.Parameter GitHubCliLimit
  The maximum number of labels to return from the GitHub CLI. The default value is 999.

.Parameter LabelsToApplyCsvUri
  The URI to the CSV file containing the labels to apply to the GitHub repository. The default value is https://raw.githubusercontent.com/jtracey93/label-source/main/avm-github-labels.csv.

.Parameter NoUserPrompts
  If set to $true, the default value, the script will not prompt the user to confirm they want to remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels. If set to $false, the script will prompt the user to confirm they want to remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels.

  This is useful for running the script in automation workflows.

.EXAMPLE
  Create the AVM labels in the repository Org/MyGitHubRepo and remove all pre-existing labels.

  Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo"

.EXAMPLE
  Create the AVM labels in the repository Org/MyGitHubRepo and do not remove any pre-existing labels, just overwrite any labels that have the same name.

  Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -RemoveExistingLabels $false

.EXAMPLE
  Create the AVM labels in the repository Org/MyGitHubRepo and output the pre-existing and post-existing labels to the directory C:\GitHubLabels.

  Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -OutputDirectory "C:\GitHubLabels"

.EXAMPLE
  Create the AVM labels in the repository Org/MyGitHubRepo and output the pre-existing and post-existing labels to the directory C:\GitHubLabels and do not remove any pre-existing labels, just overwrite any labels that have the same name.

  Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -OutputDirectory "C:\GitHubLabels" -RemoveExistingLabels $false

.EXAMPLE
  Create the AVM labels in the repository Org/MyGitHubRepo and do not create the pre-existing and post-existing labels CSV files and do not remove any pre-existing labels, just overwrite any labels that have the same name.

  Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -RemoveExistingLabels $false -CreateCsvLabelExports $false

.EXAMPLE
  Create the AVM labels in the repository Org/MyGitHubRepo and do not create the pre-existing and post-existing labels CSV files and do not remove any pre-existing labels, just overwrite any labels that have the same name. Finally, use a custom CSV file hosted on the internet to create the labels from.

  Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -OutputDirectory "C:\GitHubLabels" -RemoveExistingLabels $false -CreateCsvLabelExports $false -LabelsToApplyCsvUri "https://example.com/csv/avm-github-labels.csv"
#>

#Requires -PSEdition Core

[CmdletBinding()]
param (
  [Parameter(Mandatory = $true)]
  [string]$RepositoryName,

  [Parameter(Mandatory = $false)]
  [bool]$RemoveExistingLabels = $true,

  [Parameter(Mandatory = $false)]
  [bool]$UpdateAndAddLabelsOnly = $true,

  [Parameter(Mandatory = $false)]
  [bool]$CreateCsvLabelExports = $true,

  [Parameter(Mandatory = $false)]
  [string]$OutputDirectory = (Get-Location),

  [Parameter(Mandatory = $false)]
  [int]$GitHubCliLimit = 999,

  [Parameter(Mandatory = $false)]
  [string]$LabelsToApplyCsvUri = "https://zojovano.github.io/azure-verified-modules-copy/governance/avm-standard-github-labels.csv",

  [Parameter(Mandatory = $false)]
  [bool]$NoUserPrompts = $false
)

# Check if the GitHub CLI is installed
$GitHubCliInstalled = Get-Command gh -ErrorAction SilentlyContinue
if ($null -eq $GitHubCliInstalled) {
  throw "The GitHub CLI is not installed. Please install the GitHub CLI and try again."
}
Write-Host "The GitHub CLI is installed..." -ForegroundColor Green

# Check if GitHub CLI is authenticated
$GitHubCliAuthenticated = gh auth status
if ($LASTEXITCODE -ne 0) {
  Write-Host $GitHubCliAuthenticated -ForegroundColor Red
  throw "Not authenticated to GitHub. Please authenticate to GitHub using the GitHub CLI, `gh auth login`, and try again."
}
Write-Host "Authenticated to GitHub..." -ForegroundColor Green

# Check if GitHub repository name is valid
$GitHubRepositoryNameValid = $RepositoryName -match "^[a-zA-Z0-9-]+/[a-zA-Z0-9-]+$"
if ($false -eq $GitHubRepositoryNameValid) {
  throw "The GitHub repository name $RepositoryName is not valid. Please check the repository name and try again. The format must be <OrgName>/<RepoName>"
}

# List GitHub repository provided and check it exists
$GitHubRepository = gh repo view $RepositoryName
if ($LASTEXITCODE -ne 0) {
  Write-Host $GitHubRepository -ForegroundColor Red
  throw "The GitHub repository $RepositoryName does not exist. Please check the repository name and try again."
}
Write-Host "The GitHub repository $RepositoryName exists..." -ForegroundColor Green

# PRE - Get the current GitHub repository labels and export to a CSV file in the current directory or where -OutputDirectory specifies if set to a valid directory path and the directory exists or can be created if it does not exist already
if ($RemoveExistingLabels -or $UpdateAndAddLabelsOnly) {
  Write-Host "Getting the current GitHub repository (pre) labels for $RepositoryName..." -ForegroundColor Yellow
  $GitHubRepositoryLabels = gh label list -R $RepositoryName -L $GitHubCliLimit --json name,description,color
  if ($null -ne $GitHubRepositoryLabels -and $CreateCsvLabelExports -eq $true) {
    $csvFileNamePathPre = "$OutputDirectory\$($RepositoryName.Replace('/', '_'))-Labels-Pre-$(Get-Date -Format FileDateTime).csv"
    Write-Host "Exporting the current GitHub repository (pre) labels for $RepositoryName to $csvFileNamePathPre" -ForegroundColor Yellow
    $GitHubRepositoryLabels | ConvertFrom-Json | Export-Csv -Path $csvFileNamePathPre -NoTypeInformation
  }
}

# Remove all pre-existing labels if -RemoveExistingLabels is set to $true and user confirms they want to remove all pre-existing labels
if ($null -ne $GitHubRepositoryLabels) {
  $GitHubRepositoryLabelsJson = $GitHubRepositoryLabels | ConvertFrom-Json
  if ($RemoveExistingLabels -eq $true -and $NoUserPrompts -eq $false -and $UpdateAndAddLabelsOnly -eq $false) {
    $RemoveExistingLabelsConfirmation = Read-Host "Are you sure you want to remove all $($GitHubRepositoryLabelsJson.Count) pre-existing labels from $($RepositoryName)? (Y/N)"
    if ($RemoveExistingLabelsConfirmation -eq "Y") {
      Write-Host "Removing all pre-existing labels from $RepositoryName..." -ForegroundColor Yellow
      $GitHubRepositoryLabels | ConvertFrom-Json | ForEach-Object {
        Write-Host "Removing label $($_.name) from $RepositoryName..." -ForegroundColor DarkRed
        gh label delete -R $RepositoryName $_.name --yes
      }
    }
  }
  if ($RemoveExistingLabels -eq $true -and $NoUserPrompts -eq $true -and $UpdateAndAddLabelsOnly -eq $false) {
    Write-Host "Removing all pre-existing labels from $RepositoryName..." -ForegroundColor Yellow
    $GitHubRepositoryLabels | ConvertFrom-Json | ForEach-Object {
      Write-Host "Removing label $($_.name) from $RepositoryName..." -ForegroundColor DarkRed
      gh label delete -R $RepositoryName $_.name --yes
    }
  }
}
if ($null -eq $GitHubRepositoryLabels) {
  Write-Host "No pre-existing labels to remove or not selected to be removed from $RepositoryName..." -ForegroundColor Magenta
}

# Check LabelsToApplyCsvUri is valid and contains a CSV content
Write-Host "Checking $LabelsToApplyCsvUri is valid..." -ForegroundColor Yellow
$LabelsToApplyCsvUriValid = $LabelsToApplyCsvUri -match "^https?://"
if ($false -eq $LabelsToApplyCsvUriValid) {
  throw "The LabelsToApplyCsvUri $LabelsToApplyCsvUri is not valid. Please check the URI and try again. The format must be a valid URI."
}
Write-Host "The LabelsToApplyCsvUri $LabelsToApplyCsvUri is valid..." -ForegroundColor Green

# Create AVM labels from the AVM labels CSV file stored on the web using the ConvertFrom-Csv cmdlet
$avmLabelsCsv = Invoke-WebRequest -Uri $LabelsToApplyCsvUri | ConvertFrom-Csv

# Check if the AVM labels CSV file contains the following columns: Name, Description, HEX
$avmLabelsCsvColumns = $avmLabelsCsv | Get-Member -MemberType NoteProperty | Select-Object -ExpandProperty Name
$avmLabelsCsvColumnsValid = $avmLabelsCsvColumns -contains "Name" -and $avmLabelsCsvColumns -contains "Description" -and $avmLabelsCsvColumns -contains "HEX"
if ($false -eq $avmLabelsCsvColumnsValid) {
  throw "The labels CSV file does not contain the required columns: Name, Description, HEX. Please check the CSV file and try again. It contains the following columns: $avmLabelsCsvColumns"
}
Write-Host "The labels CSV file contains the required columns: Name, Description, HEX" -ForegroundColor Green

# Create the AVM labels in the GitHub repository
Write-Host "Creating/Updating the $($avmLabelsCsv.Count) AVM labels in $RepositoryName..." -ForegroundColor Yellow
$avmLabelsCsv | ForEach-Object {
  if ($GitHubRepositoryLabelsJson.name -contains $_.name) {
    Write-Host "The label $($_.name) already exists in $RepositoryName. Updating the label to ensure description and color are consistent..." -ForegroundColor Magenta
    gh label create -R $RepositoryName "$($_.name)" -c $_.HEX -d $($_.Description) --force
  }
  else {
    Write-Host "The label $($_.name) does not exist in $RepositoryName. Creating label $($_.name) in $RepositoryName..." -ForegroundColor Cyan
    gh label create -R $RepositoryName "$($_.Name)" -c $_.HEX -d $($_.Description) --force
  }
}

# POST - Get the current GitHub repository labels and export to a CSV file in the current directory or where -OutputDirectory specifies if set to a valid directory path and the directory exists or can be created if it does not exist already
if ($CreateCsvLabelExports -eq $true) {
  Write-Host "Getting the current GitHub repository (post) labels for $RepositoryName..." -ForegroundColor Yellow
  $GitHubRepositoryLabels = gh label list -R $RepositoryName -L $GitHubCliLimit --json name,description,color
  if ($null -ne $GitHubRepositoryLabels) {
    $csvFileNamePathPost = "$OutputDirectory\$($RepositoryName.Replace('/', '_'))-Labels-Post-$(Get-Date -Format FileDateTime).csv"
    Write-Host "Exporting the current GitHub repository (post) labels for $RepositoryName to $csvFileNamePathPost" -ForegroundColor Yellow
    $GitHubRepositoryLabels | ConvertFrom-Json | Export-Csv -Path $csvFileNamePathPost -NoTypeInformation
  }
}

# If -RemoveExistingLabels is set to $true and user confirms they want to remove all pre-existing labels, check that only the AVM labels exist in the repository
if ($RemoveExistingLabels -eq $true -and ($RemoveExistingLabelsConfirmation -eq "Y" -or $NoUserPrompts -eq $true) -and $UpdateAndAddLabelsOnly -eq $false) {
  Write-Host "Checking that only the AVM labels exist in $RepositoryName..." -ForegroundColor Yellow
  $GitHubRepositoryLabels = gh label list -R $RepositoryName -L $GitHubCliLimit --json name,description,color
  $GitHubRepositoryLabels | ConvertFrom-Json | ForEach-Object {
    if ($avmLabelsCsv.Name -notcontains $_.name) {
      throw "The label $($_.name) exists in $RepositoryName but is not in the CSV file."
    }
  }
  Write-Host "Only the CSV labels exist in $RepositoryName..." -ForegroundColor Green
}

Write-Host "The CSV labels have been created/updated in $RepositoryName..." -ForegroundColor Green
```
Module owners MUST test that child and extension resources, and those Bicep or Terraform interface resources that are supported by their modules, are validated in E2E tests as per SNFR2 to ensure they deploy and are configured correctly.
These MAY be tested in a separate E2E test and DO NOT have to be tested in each E2E test.
Module owners MUST set the default resource name prefix for child, extension, and interface resources to the associated abbreviation for the specific resource as documented in the following CAF article Abbreviation examples for Azure resources, if specified and documented. This reduces the amount of input values a module consumer MUST provide by default when using the module.
For example, a Private Endpoint that is being deployed as part of a resource module, via the mandatory interfaces, MUST set the Private Endpoint’s default name to begin with the prefix of pep-.
Module owners MUST also provide the ability for these default names, including the prefixes, to be overridden via a parameter/variable if the consumer wishes to.
Furthermore, as per RMNFR2, Resource Modules MUST NOT have a default value specified for the name of the primary resource and therefore the name MUST be provided and specified by the module consumer.
The name provided MAY be used by the module owner to generate the rest of the default name for child, extension, and interface resources if they wish to. For example, for the Private Endpoint mentioned above, the full default name that can be overridden by the consumer, MAY be pep-<primary-resource-name>.
Tip
If the resource does not have a documented abbreviation in Abbreviation examples for Azure resources, then the module owner is free to use a sensible prefix instead.
Modules SHOULD implement unit testing to ensure logic and conditions within parameters/variables/locals are performing correctly. These tests MUST pass before a module version can be published.
Unit Tests test specific module functionality without deploying resources. They are used on more complex modules. In Terraform, these live in tests/unit.
Modules MUST use static analysis, e.g., linting, security scanning (PSRule, tflint, etc.). These tests MUST pass before a module version can be published.
There may be differences between languages in linting rules and standards, but the AVM core team will try to close these and bring them into alignment over time.
Modules MUST implement idempotency end-to-end (deployment) testing, e.g., deploying the module twice over the top of itself.
Modules SHOULD pass the idempotency test, as we are aware that there are some exceptions where they may fail as a false-positive or legitimate cases where a resource cannot be idempotent.
For example, Virtual Machine Image names must be unique on each resource creation/update.
A module MUST have an owner that is defined and managed by a GitHub Team in the Azure GitHub organization.
Today this is only Microsoft FTEs, but everyone is welcome to contribute. The module just MUST be owned by a Microsoft FTE (today) so we can enforce and provide the long-term support required by this initiative.
Note
The names for the GitHub teams for each approved module are already defined in the respective Module Indexes. These teams MUST be created (and used) for each module.
In AVM there will be multiple different teams involved throughout the initiative’s lifecycle and ongoing long-term support. These teams are listed below alongside their definitions.
Important
Individuals can be members of multiple teams, at once, that are defined below.
Managing the AVM Solution: Leading and managing AVM from a technical standpoint, ensuring the maintenance and growth of the Public Bicep Registry’s repository and the Terraform Registry. Governing the lifecycle and support SLAs for all AVM modules, as well as providing overall governance and overseeing/facilitating the contribution process.
Testing and quality enforcement: Developing, operating and enforcing the test framework and related tooling with all its quality gates. Providing initial reviews for all modules, making sure all standards are met.
Documentation: Defining and refining principles, contribution and consumption guidelines, specifications and procedures related to AVM modules, maintaining and publishing all related documentation on the program’s public website.
Community Engagement: Organizing internal and external events, such as hackathons, office hours, community calls and training events for current and future module owners and contributors. Presenting in live events both publicly and internally; publishing blog posts and videos on YouTube, etc.
Security Enhancements: Facilitating the implementation and/or implementing security enhancements across all AVM repositories - through the Well-Architected Framework (WAF).
Supporting Module Owners: Providing day-to-day support for module owners, helping troubleshoot and manage security fixes for orphaned modules.
Improving Processes and Gathering Insights: Improving automation for issue triage and management processes and leading the development of internal dashboards to gain insights into contribution and consumption metrics.
Undefined tasks: Anything else not defined below for another team or in the RACI 👍
The team includes both technical and non-technical team members who are all Microsoft FTEs.
Module Owners
Important
Today, module owners MUST be Microsoft FTEs. This is to ensure that within AVM the long-term support for each module can be upheld and honoured.
Module owners are responsible for:
Initial module development
Module Maintenance (proactive & reactive)
Regular updates to ensure compatibility with the latest Azure services (including supporting new API versions and referencing the newest AVM modules when applicable).
WAF Reliability & Security alignment
Bug fixes, security patches and feature improvements.
Ensuring long term compliance with AVM specifications
Implementing and improving automated testing and validation tools for new modules.
The Azure Bicep & Terraform Product Groups are responsible for:
Backup/Additional support for orphaned modules to the AVM Core Team
Providing inputs and feedback on AVM
Taking on feedback and feature requests on their products, Bicep & Terraform, from AVM usage
Note
We are investigating, as a future investment area, working with all Azure Product Groups so that they take on ownership of, or contribute to, the AVM modules for their service/product.
RACI
RACI Definition
R = Responsible – Those who do the work to complete the task/responsibility.
A = Accountable – The one answerable for the correct and thorough completion of the task. There must be only one accountable person per task/responsibility. Typically has ‘sign-off’.
C = Consulted – Those whose opinions are sought.
I = Informed – Those who are kept up to date on progress.
The below table defines a RACI to be adopted by all parties referenced in the table to ensure customers can trust these modules and can consume and contribute to the initiative at scale.
| Action/Task/Responsibility | Module Owners | Module Contributors | AVM Core Team | Product Groups | Notes |
| --- | --- | --- | --- | --- | --- |
| Build/Construct an AVM Module | R, A | R, C | C, I | I | |
| Publish a Bicep AVM Module to the Bicep Public Registry | R, A | C, I | C, I | I | |
| Publish a Terraform AVM Module to the Terraform Registry | R, A | C, I | C, I | I | |
| Manage and maintain tooling/testing frameworks pertaining to module quality | C, I | C, I | R, A | C, I | |
| Manage/run the AVM central backlog (module proposals, orphaned modules, test enhancements, etc.) | | | | | |
Module owners MAY cross-reference other modules to build either Resource or Pattern modules. However, they MUST be referenced only by a HashiCorp Terraform registry reference to a pinned version, e.g., as in the sketch below.
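A hypothetical cross-reference to a published AVM module, pinned to an exact registry version (the module name, version and inputs below are placeholders, not a prescribed call):

```terraform
module "virtual_network" {
  source  = "Azure/avm-res-network-virtualnetwork/azurerm" # registry reference, never a local path
  version = "0.2.0"                                        # pinned to a specific published version (placeholder)

  # ... module inputs ...
  enable_telemetry = var.enable_telemetry # pass the telemetry toggle through to the referenced module
}
```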
Authors SHOULD NOT output entire resource objects as these may contain sensitive outputs and the schema can change with API or provider versions. Instead, authors SHOULD output the computed attributes of the resource as discrete outputs. This kind of pattern protects against provider schema changes and is known as an anti-corruption layer.
Remember, you SHOULD NOT output values that are already inputs (other than name).
E.g.,
```terraform
# Resource output, computed attribute.
output "foo" {
  description = "MyResource foo attribute"
  value       = azurerm_resource_myresource.foo
}

# Resource output for resources that are deployed using `for_each`. Again only computed attributes.
output "childresource_foos" {
  description = "MyResource children's foo attributes"
  value = {
    for key, value in azurerm_resource_mychildresource : key => value.foo
  }
}

# Output of a sensitive attribute
output "bar" {
  description = "MyResource bar attribute"
  value       = azurerm_resource_myresource.bar
  sensitive   = true
}
```
Where descriptions for variables and outputs span multiple lines, the description MAY provide variable input examples for each variable using the HEREDOC format and embedded markdown.
Example:
````terraform
variable "my_complex_input" {
  type = map(object({
    param1 = string
    param2 = optional(number, null)
  }))
  description = <<DESCRIPTION
A complex input variable that is a map of objects.
Each object has two attributes:

- `param1`: A required string parameter.
- `param2`: (Optional) An optional number parameter.

Example Input:

```terraform
my_complex_input = {
  "object1" = {
    param1 = "value1"
    param2 = 2
  }
  "object2" = {
    param1 = "value2"
  }
}
```
DESCRIPTION
}
````
Sometimes we need to ensure that the resources created are compliant with some rules at a minimum extent; for example, a subnet has to be connected to at least one network_security_group. The user SHOULD either pass in a security_group_id and ask us to make a connection to an existing security group, or ask us to create a new security group.
The disadvantage of this approach is that if the user creates a security group directly in the root module and uses its id as a variable of the module, the expression which determines the value of count will contain an attribute from another resource, and the value of that attribute is “known after apply” at plan stage. Terraform core will then not be able to produce an exact plan of the deployment during the “plan” stage.
For this kind of parameter, wrapping with an object type is RECOMMENDED:
```terraform
variable "security_group" {
  type = object({
    id = string
  })
  default = null
}
```
The advantage of doing so is that the value which is “known after apply” is encapsulated in an object, and whether the object itself is null or not can easily be determined. Since the id of a resource cannot be null, this approach avoids the situation we are facing in the first example, as in the sketch below:
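A minimal sketch (the resource name and var.location/var.resource_group_name are illustrative assumptions):

```terraform
# Create a new network security group only when the consumer did not supply an existing one.
# `var.security_group == null` is known at plan time even when `var.security_group.id` is not.
resource "azurerm_network_security_group" "this" {
  count = var.security_group == null ? 1 : 0

  name                = "nsg-example"
  location            = var.location
  resource_group_name = var.resource_group_name
}
```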
ID: TFNFR14 - Category: Inputs - Not allowed variables
Since count, for_each and depends_on were introduced for modules in Terraform 0.13, module development has been significantly simplified. Module owners MUST NOT add variables like enabled or module_depends_on to control the entire module’s operation. Boolean feature toggles, however, are acceptable.
Variables used as feature switches SHOULD apply a positive statement: use xxx_enabled instead of xxx_disabled. Avoid double negatives like !xxx_disabled.
Please use xxx_enabled instead of xxx_disabled as the name of a variable.
ID: TFNFR17 - Category: Code Style - Variables with Descriptions
The target audience of the description is the module users.
For a newly created variable (e.g., a variable for switching a dynamic block on and off), its description SHOULD precisely describe the input parameter’s purpose and the expected data type. The description SHOULD NOT contain any information for module developers; this kind of information can only exist in code comments.
For an object type variable, the description can be composed in HEREDOC format:
```terraform
variable "kubernetes_cluster_key_management_service" {
  type = object({
    key_vault_key_id         = string
    key_vault_network_access = optional(string)
  })
  default     = null
  description = <<-EOT
    - `key_vault_key_id` - (Required) Identifier of Azure Key Vault key. See [key identifier format](https://learn.microsoft.com/en-us/azure/key-vault/general/about-keys-secrets-certificates#vault-name-and-object-name) for more details. When Azure Key Vault key management service is enabled, this field is required and must be a valid key identifier. When `enabled` is `false`, leave the field empty.
    - `key_vault_network_access` - (Optional) Network access of the key vault. The possible values are `Public` and `Private`. `Public` means the key vault allows public access from all networks. `Private` means the key vault disables public access and enables private link. Defaults to `Public`.
  EOT
}
```
ID: TFNFR19 - Category: Code Style - Sensitive Data Variables
If a variable’s type is object and it contains one or more fields that would be assigned to a sensitive argument, then this whole variable SHOULD be declared as sensitive = true; otherwise you SHOULD extract the sensitive field into a separate variable block with sensitive = true.
Nullable SHOULD be set to false for collection values (e.g. sets, maps, lists) when using them in loops. However for scalar values like string and number, a null value MAY have a semantic meaning and as such these values are allowed.
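A minimal sketch of a non-nullable collection variable (the variable name and object shape are illustrative):

```terraform
variable "subnets" {
  type = map(object({
    address_prefixes = list(string)
  }))
  default     = {}
  nullable    = false
  description = "A map of subnets to create. Defaults to an empty map so it can always be iterated over safely."
}
```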
Sometimes we will find that the name of a variable is no longer suitable, or that a change SHOULD be made to the data type. We want to ensure forward compatibility within a major version, so direct changes are strictly forbidden. The right way to do this is to move this variable to an independent deprecated_variables.tf file, then redefine the new parameter in variable.tf and make sure it’s compatible everywhere else.
A deprecated variable MUST be annotated as DEPRECATED at the beginning of the description; at the same time, the replacement’s name SHOULD be declared. E.g.,
```terraform
variable "enable_network_security_group" {
  type        = string
  default     = null
  description = "DEPRECATED, use `network_security_group_enabled` instead; Whether to generate a network security group and assign it to the subnet. Changing this forces a new resource to be created."
}
```
A cleanup of deprecated_variables.tf SHOULD be performed during a major version release.
The terraform.tf file MUST only contain one terraform block.
The first line of the terraform block MUST define a required_version property for the Terraform CLI.
The required_version property MUST include a constraint on the minimum version of the Terraform CLI. Previous releases of the Terraform CLI can have unexpected behavior.
The required_version property MUST include a constraint on the maximum major version of the Terraform CLI. Major version releases of the Terraform CLI can introduce breaking changes and MUST be tested.
The required_version property constraint SHOULD use the ~> #.# or the >= #.#.#, < #.#.# format.
Note: You can read more about Terraform version constraints in the documentation.
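A terraform block that satisfies these constraints might look like this (the version numbers shown are assumptions; pick the ones your module actually supports):

```terraform
terraform {
  # Minimum Terraform CLI version plus an upper bound on the major version.
  required_version = ">= 1.5.0, < 2.0.0"
}
```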
ID: TFNFR26 - Category: Code Style - Providers in required_providers
The terraform block in terraform.tf MUST contain the required_providers block.
Each provider used directly in the module MUST be specified with the source and version properties. Providers in the required_providers block SHOULD be sorted in alphabetical order.
Do not add providers to the required_providers block that are not directly required by this module. If submodules are used then each submodule SHOULD have its own versions.tf file.
The source property MUST be in the format of namespace/name. If this is not explicitly specified, it can cause failure.
The version property MUST include a constraint on the minimum version of the provider. Older provider versions may not work as expected.
The version property MUST include a constraint on the maximum major version. A provider major version release may introduce breaking changes, so updates to the major version constraint for a provider MUST be tested.
The version property constraint SHOULD use the ~> #.# or the >= #.#.#, < #.#.# format.
Note: You can read more about Terraform version constraints in the documentation.
By the rules, a provider MUST NOT be declared in the module code. The only exception is when the module indeed needs different instances of the same kind of provider (e.g., manipulating resources across different locations or accounts); in that case you MUST declare configuration_aliases in terraform.required_providers. See details in this document.
A provider block declared in the module MUST only be used to differentiate instances used in resource and data blocks. Declaration of fields other than alias in a provider block is strictly forbidden, as it could leave module users unable to utilize count, for_each or depends_on. Configuration of the provider instance SHOULD be passed in by the module users.
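As a sketch of the exception case (version constraints, alias names and the module path are illustrative assumptions): the module declares the extra provider instance via configuration_aliases, and the caller passes the configured instances in.

```terraform
# In the module's terraform.tf: declare the additional provider instance via configuration_aliases.
terraform {
  required_providers {
    azurerm = {
      source                = "hashicorp/azurerm"
      version               = ">= 3.71.0, < 4.0.0"
      configuration_aliases = [azurerm.secondary]
    }
  }
}
```

```terraform
# In the calling root module: pass the configured provider instances in.
# Assumes the root module has defined `provider "azurerm" { alias = "other_region" ... }`.
module "example" {
  source = "./modules/example" # placeholder path

  providers = {
    azurerm           = azurerm
    azurerm.secondary = azurerm.other_region
  }
}
```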
Module owners MUST set a branch protection policy on their GitHub Repositories for AVM modules against their default branch, typically main, to do the following:
Require a Pull Request before merging
Require approval of the most recent reviewable push
Dismiss stale pull request approvals when new commits are pushed
Require linear history
Prevent force pushes
Do not allow deletions
Require CODEOWNERS review
Do not allow bypassing the above settings
The above settings MUST also be enforced for administrators
Tip
If you use the template repository as mentioned in the contribution guide, the above will automatically be set.
Sometimes we notice that the name of a certain output is no longer appropriate; however, since we have to ensure forward compatibility within the same major version, its name MUST NOT be changed directly. It MUST be moved to an independent deprecated_outputs.tf file, then a new output redefined in output.tf, making sure it’s compatible everywhere else in the module.
A cleanup of deprecated_outputs.tf and other logic related to compatibility SHOULD be performed during a major version upgrade.
ID: TFNFR31 - Category: Code Style - locals.tf for Locals Only
In the locals.tf file we may declare multiple locals blocks, but only locals blocks are allowed.
You MAY declare locals blocks next to a resource block or data block for some advanced scenarios, like making a fake module to execute some light-weight tests aimed at the expressions.
From Terraform AzureRM provider 3.0, the default value of prevent_deletion_if_contains_resources in the provider block is true. This can lead to unstable tests, because the test subscription has some policies applied that add extra resources during the run, which can cause failures when destroying resource groups.
Since we cannot guarantee that Azure Policy remediation tasks won’t be applied to our testing environment in the future, for a robust testing environment, prevent_deletion_if_contains_resources SHOULD be explicitly set to false.
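In the test/example root configuration (not inside the module itself, per the provider declaration rules above), this looks like the following sketch:

```terraform
provider "azurerm" {
  features {
    resource_group {
      # Allow `terraform destroy` to remove resource groups even if policy
      # remediation added extra resources during the test run.
      prevent_deletion_if_contains_resources = false
    }
  }
}
```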
newres is a command-line tool that generates Terraform configuration files for a specified resource type. It automates the process of creating variables.tf and main.tf files, making it easier to get started with Terraform and reducing the time spent on manual configuration.
Module owners MAY use newres when they’re trying to add a new resource block, attribute, or nested block. They MAY generate the whole block along with the corresponding variable blocks in an empty folder, then copy-paste the parts they need with essential refactoring.
Module owners MUST use map(xxx) or set(xxx) as a resource’s for_each collection; the map’s keys or set’s elements MUST be static literals.
Good example:
```terraform
resource "azurerm_subnet" "pair" {
  for_each = var.subnet_map // `map(string)`, when user call this module, it could be: `{ "subnet0": "subnet0" }`, or `{ "subnet0": azurerm_subnet.subnet0.name }`

  name                 = "${each.value}-pair"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.0.1.0/24"]
}
```
Bad example:
```terraform
resource "azurerm_subnet" "pair" {
  for_each = var.subnet_name_set // `set(string)`, when user use `toset([azurerm_subnet.subnet0.name])`, it would cause an error.

  name                 = "${each.value}-pair"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.0.1.0/24"]
}
```
There are 3 types of assignment statements in a resource or data block: argument, meta-argument and nested block. The argument assignment statement is a parameter followed by =, as in the sketch below:
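A short illustration of the three kinds of statements (the resource, variable names and values are placeholders):

```terraform
resource "azurerm_subnet" "example" {
  # Meta-argument: interpreted by Terraform itself rather than the provider.
  count = var.subnet_enabled ? 1 : 0

  # Arguments: a parameter name followed by `=`.
  name                 = "snet-example"
  resource_group_name  = var.resource_group_name
  virtual_network_name = var.virtual_network_name
  address_prefixes     = ["10.0.1.0/24"]

  # Nested block: a block name followed by braces, with no `=`.
  delegation {
    name = "delegation"

    service_delegation {
      name = "Microsoft.ContainerInstance/containerGroups"
    }
  }
}
```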