Azure security baseline for Azure Data Factory

This security baseline applies guidance from the Azure Security Benchmark version 2.0 to Azure Data Factory. The Azure Security Benchmark provides recommendations on how you can secure your cloud solutions on Azure. The content is grouped by the security controls defined by the Azure Security Benchmark and the related guidance applicable to Azure Data Factory.

Note

Controls not applicable to Azure Data Factory, and those for which the global guidance is recommended verbatim, have been excluded. To see how Azure Data Factory completely maps to the Azure Security Benchmark, see the full Azure Data Factory security baseline mapping file.

Network Security

For more information, see the Azure Security Benchmark: Network Security.

NS-1: Implement security for internal traffic

Guidance: When you deploy Data Factory resources, create or use an existing virtual network. Ensure that all Azure virtual networks follow an enterprise segmentation principle that aligns with the business risks. Any system that might incur higher risk for the organization should be isolated within its own virtual network and sufficiently secured with a network security group (NSG) and/or Azure Firewall.

Use Azure Security Center Adaptive Network Hardening to recommend network security group configurations that limit ports and source IPs based on external network traffic rules.

The Azure-SSIS integration runtime supports virtual network injection into the customer's virtual network and abides by all NSG and firewall rules the customer sets in that network. When you create an Azure-SSIS Integration Runtime (IR), you have the option to join it to a virtual network. This allows Azure Data Factory to create certain network resources, such as a network security group (NSG) and a load balancer. You can also provide your own static public IP address or have Azure Data Factory create one for you. On the NSG that Azure Data Factory creates automatically, port 3389 is open to all traffic by default. Lock this down to ensure that only your administrators have access.
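For illustration, the following minimal sketch (using the Azure SDK for Python; the subscription, resource group, NSG name, and admin IP range are placeholders) tightens the inbound RDP rule on the NSG used by an Azure-SSIS IR subnet so that only an administrative address range is allowed:

    # Minimal sketch: restrict inbound RDP (3389) on the NSG used by an
    # Azure-SSIS IR subnet to an administrative IP range only.
    # All names and the address prefix below are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient
    from azure.mgmt.network.models import SecurityRule

    subscription_id = "<subscription-id>"
    resource_group = "<resource-group>"
    nsg_name = "<nsg-created-for-ssis-ir>"

    network_client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

    rdp_rule = SecurityRule(
        protocol="Tcp",
        source_address_prefix="203.0.113.0/24",   # admin IP range only (placeholder)
        source_port_range="*",
        destination_address_prefix="*",
        destination_port_range="3389",
        access="Allow",
        priority=100,
        direction="Inbound",
        description="Allow RDP only from the admin network",
    )

    poller = network_client.security_rules.begin_create_or_update(
        resource_group, nsg_name, "Allow-RDP-From-Admin-Only", rdp_rule
    )
    print(poller.result().provisioning_state)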

A self-hosted integration runtime can also be set up on an IaaS VM within the customer's virtual network; its network traffic is likewise governed by the customer's NSG and firewall settings.

Based on your applications and enterprise segmentation strategy, restrict or allow traffic between internal resources based on your network security group rules. For specific, well-defined applications (such as a 3-tier app), this can be a highly secure "deny by default, permit by exception" approach.

Responsibility: Customer

Azure Security Center monitoring: None

NS-3: Establish private network access to Azure services

Guidance: Use Azure Private Link to enable private access to Data Factory from your virtual networks without crossing the internet.

Private access is an additional defense in depth measure to the authentication and traffic security offered by Azure services.

You can configure private endpoints in the Azure Data Factory managed virtual network to connect to data stores privately. For more information, see managed private endpoints (/azure/data-factory/managed-virtual-network-private-endpoint#managed-private-endpoints).
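As an illustrative sketch, the following Python snippet creates a managed private endpoint to a storage account from a Data Factory managed virtual network. It assumes the azure-mgmt-datafactory package exposes the managed_private_endpoints operations and the models shown; all IDs and names are placeholders.

    # Minimal sketch: create a managed private endpoint in a Data Factory
    # managed virtual network that connects privately to a storage account (blob).
    # Assumes azure-mgmt-datafactory exposes managed_private_endpoints operations;
    # all names and IDs are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient
    from azure.mgmt.datafactory.models import (
        ManagedPrivateEndpoint,
        ManagedPrivateEndpointResource,
    )

    subscription_id = "<subscription-id>"
    resource_group = "<resource-group>"
    factory_name = "<data-factory-name>"
    storage_account_id = (
        "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
        "/providers/Microsoft.Storage/storageAccounts/<storage-account>"
    )

    adf_client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

    endpoint = ManagedPrivateEndpointResource(
        properties=ManagedPrivateEndpoint(
            private_link_resource_id=storage_account_id,
            group_id="blob",
        )
    )

    adf_client.managed_private_endpoints.create_or_update(
        resource_group,
        factory_name,
        "default",            # the managed virtual network name
        "blob-storage-pe",    # managed private endpoint name (placeholder)
        endpoint,
    )

After creation, the private endpoint connection must still be approved on the target data store.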

Data Factory does not provide the capability to configure Virtual Network service endpoints.

As described in NS-1, when you create an Azure-SSIS Integration Runtime (IR) you can join it to a virtual network, which lets Azure Data Factory create network resources such as an NSG and a load balancer, and you can provide your own static public IP address or have Data Factory create one for you. On the automatically created NSG, port 3389 is open to all traffic by default; lock it down so that only your administrators have access.

Self-hosted IRs can be deployed on an on-premises machine or on an Azure virtual machine inside a virtual network. Ensure that the virtual network subnet used for the deployment has an NSG configured to allow only administrative access. By default, the Azure-SSIS IR blocks outbound traffic on port 3389 with a Windows Firewall rule on each IR node for protection. You can further secure your virtual network resources by associating an NSG with the subnet and setting strict rules.

Responsibility: Customer

Azure Security Center monitoring: None

NS-4: Protect applications and services from external network attacks

Guidance: Protect your Data Factory resources against attacks from external networks, including distributed denial of service (DDoS) attacks, application-specific attacks, and unsolicited and potentially malicious internet traffic. Use Azure Firewall to protect applications and services against potentially malicious traffic from the internet and other external locations. Protect your assets against DDoS attacks by enabling DDoS standard protection on your Azure virtual networks. Use Azure Security Center to detect misconfiguration risks to your network-related resources.

Data Factory is not intended to run web applications, and does not require you to configure any additional settings or deploy any extra network services to protect it from external network attacks targeting web applications.

Responsibility: Customer

Azure Security Center monitoring: None

NS-5: Deploy intrusion detection/intrusion prevention systems (IDS/IPS)

Guidance: Use Azure Firewall threat intelligence-based filtering to alert on and/or block traffic to and from known malicious IP addresses and domains. The IP addresses and domains are sourced from the Microsoft Threat Intelligence feed. When payload inspection is required, you can deploy a third-party intrusion detection/intrusion prevention system (IDS/IPS) from Azure Marketplace with payload inspection capabilities. Alternately, you can use host-based IDS/IPS or a host-based endpoint detection and response (EDR) solution in conjunction with or instead of network-based IDS/IPS.

Responsibility: Customer

Azure Security Center monitoring: None

NS-6: Simplify network security rules

Guidance: Use Azure Virtual Network Service Tags to define network access controls on network security groups or Azure Firewall configured for your Data Factory resources. You can use service tags in place of specific IP addresses when creating security rules. By specifying the service tag name in the appropriate source or destination field of a rule, you can allow or deny the traffic for the corresponding service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change.

The Azure integration runtime's IP addresses are published under service tags (the DataFactory tag). The tag covers data movement (copy), external, and pipeline activities, but not data flow executions.
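For illustration, the following minimal sketch (Azure SDK for Python; the subscription and region are placeholders) looks up the address prefixes behind the DataFactory service tags for a region, which can be useful when reviewing or generating NSG and firewall rules:

    # Minimal sketch: list the address prefixes behind the DataFactory service
    # tags for one region. The region is a placeholder; adjust as needed.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient

    subscription_id = "<subscription-id>"
    network_client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

    tags = network_client.service_tags.list("westeurope")
    for tag in tags.values:
        if tag.name.startswith("DataFactory"):
            # Print the tag name and a few of its address prefixes
            print(tag.name, tag.properties.address_prefixes[:3], "...")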

Responsibility: Customer

Azure Security Center monitoring: None

NS-7: Secure Domain Name Service (DNS)

Guidance: Follow the best practices for DNS security to mitigate common attacks such as dangling DNS, DNS amplification attacks, and DNS poisoning and spoofing.

When Azure DNS is used as your authoritative DNS service, ensure DNS zones and records are protected from accidental or malicious modification using Azure RBAC and resource locks.

When you use a managed virtual network in Data Factory, connectivity through private endpoints requires DNS updates. Microsoft manages these DNS records, and you can specify the DNS entries when you create private endpoints in the managed virtual network in Data Factory.

Responsibility: Customer

Azure Security Center monitoring: None

Identity Management

For more information, see the Azure Security Benchmark: Identity Management.

IM-1: Standardize Azure Active Directory as the central identity and authentication system

Guidance: Data Factory uses Azure Active Directory (Azure AD) as the default identity and access management service. You should standardize Azure AD to govern your organization's identity and access management in:

  • Microsoft Cloud resources, such as the Azure portal, Azure Storage, Azure Virtual Machines (Linux and Windows), Azure Key Vault, PaaS, and SaaS applications.
  • Your organization's resources, such as applications on Azure or your corporate network resources.

Securing Azure AD should be a high priority in your organization's cloud security practice. Azure AD provides an identity secure score to help you assess identity security posture relative to Microsoft's best practice recommendations. Use the score to gauge how closely your configuration matches best practice recommendations, and to make improvements in your security posture.

Note: Azure AD supports external identities that allow users without a Microsoft account to sign in to their applications and resources with their external identity.

Membership of the Data Factory Contributor role lets users do the following things:

  • Create, edit, and delete data factories and child resources including datasets, linked services, pipelines, triggers, and integration runtimes.
  • Deploy Resource Manager templates. Resource Manager deployment is the deployment method used by Data Factory in the Azure portal.
  • Manage App Insights alerts for a data factory.
  • Create support tickets.

If you are running your Self-hosted Integration Runtime (IR) on an Azure Virtual Machine, you can use managed identities to authenticate to any service that supports Azure Active Directory (Azure AD) authentication, including Key Vault, without any credentials in your code. Code running on the virtual machine can use its managed identity to request access tokens for services that support Azure AD authentication.

Data Factory allows you to use managed identities and service principals to authenticate against data stores and compute services that support Azure AD authentication.
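For illustration, the following minimal sketch (Azure SDK for Python; the vault URL and secret name are placeholders) shows code on a self-hosted IR VM retrieving a secret from Key Vault with the VM's managed identity, with no credentials stored in code:

    # Minimal sketch: use the VM's managed identity to read a Key Vault secret.
    # The vault URL and secret name are placeholders.
    from azure.identity import ManagedIdentityCredential
    from azure.keyvault.secrets import SecretClient

    credential = ManagedIdentityCredential()   # uses the VM's managed identity
    client = SecretClient(
        vault_url="https://<vault-name>.vault.azure.net",
        credential=credential,
    )

    secret = client.get_secret("<secret-name>")
    print(secret.name)                         # never log secret.value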

Responsibility: Customer

Azure Security Center monitoring: None

IM-2: Manage application identities securely and automatically

Guidance: Data Factory supports managed identities for its Azure resources. Use managed identities with Data Factory instead of creating service principals to access other resources. Data Factory can natively authenticate to Azure services and resources that support Azure AD authentication through a predefined access grant rule, without using credentials hard-coded in source code or configuration files.

If managed identities are not an option, use Azure AD to create a service principal with restricted permissions at the resource level. Configure service principals with certificate credentials and fall back to client secrets only when necessary. In both cases, Azure Key Vault can be used in conjunction with Azure managed identities so that the runtime environment (such as an Azure function) can retrieve the credential from the key vault.

Responsibility: Customer

Azure Security Center monitoring: None

IM-3: Use Azure AD single sign-on (SSO) for application access

Guidance: Data Factory uses Azure Active Directory to provide identity and access management to Azure resources, cloud applications, and on-premises applications. This includes enterprise identities, such as employees, as well as external identities like partners, vendors, and suppliers. This enables single sign-on (SSO) to manage and secure access to your organization's data and resources on-premises and in the cloud. Connect all your users, applications, and devices to Azure AD for seamless, secure access and greater visibility and control.

Responsibility: Customer

Azure Security Center monitoring: None

IM-4: Use strong authentication controls for all Azure Active Directory based access

Guidance: Data Factory uses Azure Active Directory (Azure AD), which supports strong authentication controls through multi-factor authentication (MFA) and strong passwordless methods.

  • Multi-factor authentication - Enable Azure AD MFA, and then follow Azure Security Center Identity and Access Management recommendations for best practices in your MFA setup. MFA can be enforced on all users, on selected users, or at the per-user level based on sign-in conditions and risk factors.
  • Passwordless authentication - Three passwordless authentication options are available: Windows Hello for Business, Microsoft Authenticator app, and on-premises authentication methods such as smart cards.

For administrators and privileged users, ensure the highest level of the strong authentication method is used, followed by rolling out the appropriate strong authentication policy to other users.

Data Factory supports legacy password-based authentication, such as cloud-only accounts (user accounts created directly in Azure) that have a baseline password policy, or hybrid accounts (user accounts that come from on-premises Active Directory) that follow the on-premises password policies. When using password-based authentication, Azure AD provides a password protection capability that prevents users from setting passwords that are easy to guess. Microsoft provides a global list of banned passwords that is updated based on telemetry, and customers can augment the list based on their needs (such as branding or cultural references). This password protection can be used for cloud-only and hybrid accounts.

Note: Authentication based on password credentials alone is susceptible to popular attack methods. For higher security, use strong authentication such as MFA and a strong password policy. For third-party applications and marketplace services that might have default passwords, change them during initial service setup.

On-premises data store credentials can also be encrypted and stored locally on the self-hosted integration runtime machine.

Responsibility: Customer

Azure Security Center monitoring: None

IM-5: Monitor and alert on account anomalies

Guidance: Data Factory is integrated with Azure Active Directory, which provides the following data sources:

  • Sign-ins - The sign-ins report provides information about the usage of managed applications and user sign-in activities.
  • Audit logs - Provides traceability through logs for all changes done by various features within Azure AD. Examples of audit logs include changes made to any resource within Azure AD, like adding or removing users, apps, groups, roles, and policies.
  • Risky sign-ins - A risky sign-in is an indicator for a sign-in attempt that might have been performed by someone who is not the legitimate owner of a user account.
  • Users flagged for risk - A risky user is an indicator for a user account that might have been compromised.

These data sources can be integrated with Azure Monitor, Azure Sentinel, or third-party SIEM systems.

Azure Security Center can also alert you about certain suspicious activities, such as an excessive number of failed authentication attempts or deprecated accounts in the subscription.

Azure Advanced Threat Protection (ATP) is a security solution that can use Active Directory signals to identify, detect, and investigate advanced threats, compromised identities, and malicious insider actions.

If you are running your Self-hosted Integration Runtime on an Azure Virtual Machine (VM), you can, additionally, onboard your VM to Azure Sentinel. Microsoft Azure Sentinel is a scalable, cloud-native, security information event management (SIEM) and security orchestration automated response (SOAR) solution. Azure Sentinel delivers intelligent security analytics and threat intelligence across the enterprise, providing a single solution for alert detection, threat visibility, proactive hunting, and threat response.

Responsibility: Customer

Azure Security Center monitoring: None

IM-6: Restrict Azure resource access based on conditions

Guidance: Data Factory supports Azure Active Directory (Azure AD) conditional access for more granular access control based on user-defined conditions, such as requiring multifactor authentication for sign-ins from certain IP ranges. Granular authentication session management policies can also be used for different use cases. These conditional access policies apply only to user accounts that authenticate to Azure AD to access and manage the Data Factory service; they do not apply to service principals, keys, or tokens used to connect to your Data Factory resources.

Responsibility: Customer

Azure Security Center monitoring: None

IM-7: Eliminate unintended credential exposure

Guidance: Data Factory allows customers to deploy and run code, configurations, and persisted data that can contain identities or secrets. Implement Credential Scanner to identify credentials within code, configurations, and persisted data. Credential Scanner also encourages moving discovered credentials to more secure locations such as Azure Key Vault.

For GitHub, you can use the native secret scanning feature to identify credentials or other forms of secrets within the code.

If you're using Data Factory's visual UI-based authoring tool, all credentials and secrets stored in linked services are either encrypted directly by the service or referenced from Azure Key Vault at runtime, so no credentials appear in the JSON code. These credentials can never be retrieved through code or the visual UI.
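For illustration, here is a minimal sketch (JSON built in Python; the vault URL, linked service names, and secret name are placeholders) of the linked service pattern that keeps a connection string in Azure Key Vault so that no secret appears in the Data Factory JSON:

    # Minimal sketch: linked service definitions that resolve the connection
    # string from Azure Key Vault at runtime. All names are placeholders.
    import json

    key_vault_linked_service = {
        "name": "AzureKeyVaultLS",
        "properties": {
            "type": "AzureKeyVault",
            "typeProperties": {"baseUrl": "https://<vault-name>.vault.azure.net"},
        },
    }

    # A data store linked service that references the vault instead of
    # embedding the connection string.
    sql_linked_service = {
        "name": "AzureSqlDatabaseLS",
        "properties": {
            "type": "AzureSqlDatabase",
            "typeProperties": {
                "connectionString": {
                    "type": "AzureKeyVaultSecret",
                    "store": {
                        "referenceName": "AzureKeyVaultLS",
                        "type": "LinkedServiceReference",
                    },
                    "secretName": "<sql-connection-string-secret>",
                }
            },
        },
    }

    print(json.dumps(sql_linked_service, indent=2))

These definitions can be deployed through the authoring UI, Git integration, or the Data Factory management APIs.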

Responsibility: Customer

Azure Security Center monitoring: None

IM-8: Secure user access to legacy applications

Guidance: Ensure you have modern access controls and session monitoring for legacy applications and the data they store and process. While VPNs are commonly used to access legacy applications, they often have only basic access control and limited session monitoring.

Azure AD Application Proxy enables you to publish legacy on-premises applications to remote users with SSO while explicitly validating trustworthiness of both remote users and devices with Azure AD Conditional Access.

Alternatively, Microsoft Cloud App Security is a Cloud Access Security Broker (CASB) service that can provide controls for monitoring user application sessions and blocking actions (for both legacy on-premises applications and cloud software as a service (SaaS) applications).

Even if a legacy system does not support integration with Azure AD and you have to use secrets or user credentials to access it, store those credentials in Azure Key Vault and configure Data Factory to retrieve the secrets at runtime. With this approach, secrets are never exposed to data engineers or Data Factory users.

Responsibility: Customer

Azure Security Center monitoring: None

Privileged Access

For more information, see the Azure Security Benchmark: Privileged Access.

PA-1: Protect and limit highly privileged users

Guidance: The most critical built-in roles for Azure AD are the Global Administrator and the Privileged Role Administrator, as users assigned to these two roles can delegate administrator roles:

  • Global Administrator / Company Administrator: Users with this role have access to all administrative features in Azure AD, as well as services that use Azure AD identities.
  • Privileged Role Administrator: Users with this role can manage role assignments in Azure AD, as well as within Azure AD Privileged Identity Management (PIM). In addition, this role allows the management of all aspects of PIM and administrative units.

Note: You might have other critical roles that need to be governed if you use custom roles with certain privileged permissions assigned. You might also want to apply similar controls to the administrator account of critical business assets.

You should limit the number of highly privileged accounts or roles and protect these accounts at an elevated level. Users with this privilege can directly or indirectly read and modify every resource in your Azure environment.

You can enable just-in-time (JIT) privileged access to Azure resources and Azure AD using Azure AD PIM. JIT grants temporary permissions to perform privileged tasks only when users need it. PIM can also generate security alerts when there is a suspicious or unsafe activity in your Azure AD organization.

Data Factory Contributor is a built-in Azure role that provides full access to a Data Factory instance. Consider creating a custom role if you want to grant less-privileged permissions or restrict certain Data Factory functionality for such users.

If you are running your Self-hosted Integration Runtime on an Azure Virtual Machine (VM), you can additionally onboard your VM to Azure Sentinel for intelligent security analytics, alert detection, proactive hunting, and threat response, as described in IM-5.

For more information, see custom scenarios and custom roles for Data Factory (/azure/data-factory/concepts-roles-permissions#custom-scenarios-and-custom-roles).

Responsibility: Customer

Azure Security Center monitoring: None

PA-2: Restrict administrative access to business-critical systems

Guidance: Data Factory uses Azure role-based access control (Azure RBAC) to isolate access to business-critical systems by restricting which accounts are granted privileged access to the subscriptions and management groups they are in.

Ensure that you also restrict access to the management, identity, and security systems that have administrative access to your business-critical access controls, such as Active Directory Domain Controllers (DCs), security tools, and system management tools with agents installed on business-critical systems. Attackers who compromise these management and security systems can immediately weaponize them to compromise business-critical assets.

All types of access controls should be aligned to your enterprise segmentation strategy to ensure consistent access control.

Responsibility: Customer

Azure Security Center monitoring: None

PA-3: Review and reconcile user access regularly

Guidance: Data Factory uses Azure Active Directory (Azure AD) accounts to manage its resources. Review user accounts and access assignments regularly to ensure the accounts and their access are valid. You can use Azure AD and access reviews to review group memberships, access to enterprise applications, and role assignments. Azure AD reporting can provide logs to help discover stale accounts. You can also use Azure AD Privileged Identity Management (PIM) to create access review report workflows to facilitate the review process.

If you are running your Self-hosted Integration Runtime in an Azure Virtual Machine, you will need to review the local security groups and users to make sure that there are no unexpected accounts that could compromise the system.

In addition, Azure AD PIM can also be configured to alert you when an excessive number of administrator accounts are created and to identify administrator accounts that are stale or improperly configured.

Note: Some Azure services support local users and roles which are not managed through Azure AD. You will need to manage these users separately.

For more information, see Roles and permissions for Azure Data Factory (/azure/data-factory/concepts-roles-permissions).

Responsibility: Customer

Azure Security Center monitoring: None

PA-4: Set up emergency access in Azure AD

Guidance: Data Factory uses Azure Active Directory (Azure AD) to manage its resources. To prevent being accidentally locked out of your Azure AD organization, set up an emergency access account for access when normal administrative accounts cannot be used. Emergency access accounts are usually highly privileged, and they should not be assigned to specific individuals. Emergency access accounts are limited to emergency or 'break glass' scenarios where normal administrative accounts can't be used.

You should ensure that the credentials (such as password, certificate, or smart card) for emergency access accounts are kept secure and known only to individuals who are authorized to use them only in an emergency.

Responsibility: Customer

Azure Security Center monitoring: None

PA-5: Automate entitlement management

Guidance: Data Factory is integrated with Azure Active Directory (Azure AD) to manage its resources. Use Azure AD entitlement management features to automate access request workflows, including access assignments, reviews, and expiration. Dual or multi-stage approval is also supported.

Responsibility: Customer

Azure Security Center monitoring: None

PA-6: Use privileged access workstations

Guidance: Secured, isolated workstations are critically important for the security of sensitive roles like administrator, developer, and critical service operator. Use highly secured user workstations and/or Azure Bastion for administrative tasks. Use Azure Active Directory (Azure AD), Microsoft Defender Advanced Threat Protection (ATP), and/or Microsoft Intune to deploy a secure and managed user workstation for administrative tasks. The secured workstations can be centrally managed to enforce secured configuration including strong authentication, software and hardware baselines, and restricted logical and network access.

Responsibility: Customer

Azure Security Center monitoring: None

PA-7: Follow just enough administration (least privilege principle)

Guidance: Data Factory is integrated with Azure role-based access control (Azure RBAC) to manage its resources. Azure RBAC allows you to manage Azure resource access through role assignments. You can assign these roles to users, groups, service principals, and managed identities. There are pre-defined built-in roles for certain resources, and these roles can be inventoried or queried through tools such as Azure CLI, Azure PowerShell, or the Azure portal. The privileges you assign to resources through Azure RBAC should always be limited to what the roles require. This complements the just-in-time (JIT) approach of Azure AD Privileged Identity Management (PIM) and should be reviewed periodically.

Use built-in roles to allocate permissions and only create custom roles when required.

You can create a custom Azure role with more restrictive access to Data Factory.
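For illustration, the following minimal sketch builds a custom role definition (as JSON in Python) that allows reading data factories and pipeline runs without create or delete rights. The role name, scope, and the specific Microsoft.DataFactory actions shown are illustrative; confirm the operations available for your scenario before assigning the role.

    # Minimal sketch: a read-only custom Azure role for Data Factory.
    # Actions and scope are illustrative placeholders.
    import json

    custom_role = {
        "Name": "Data Factory Reader (custom)",
        "IsCustom": True,
        "Description": "Read-only access to data factories and pipeline runs.",
        "Actions": [
            "Microsoft.DataFactory/factories/read",
            "Microsoft.DataFactory/factories/pipelines/read",
            "Microsoft.DataFactory/factories/pipelineruns/read",
        ],
        "NotActions": [],
        "AssignableScopes": ["/subscriptions/<subscription-id>"],
    }

    print(json.dumps(custom_role, indent=2))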

Responsibility: Customer

Azure Security Center monitoring: None

Data Protection

For more information, see the Azure Security Benchmark: Data Protection.

DP-2: Protect sensitive data

Guidance: Protect sensitive data by restricting access using Azure role-based access control (Azure RBAC), network-based access controls, and specific controls in Azure services (such as encryption).

To ensure consistent access control, all types of access control should be aligned with your enterprise segmentation strategy. The enterprise segmentation strategy should also be informed by the location of sensitive or business critical data and systems.

For the underlying platform (managed by Microsoft), Microsoft treats all customer content as sensitive and guards against customer data loss and exposure. To ensure customer data within Azure remains secure, Microsoft has implemented some default data protection controls and capabilities.

Responsibility: Customer

Azure Security Center monitoring: None

DP-4: Encrypt sensitive information in transit

Guidance: To complement access controls, data in transit should be protected against 'out of band' attacks (such as traffic capture) using encryption to ensure that attackers cannot easily read or modify the data.

Data Factory supports data encryption in transit with TLS v1.2 or greater.

While this is optional for traffic on private networks, this is critical for traffic on external and public networks. For HTTP traffic, ensure that any clients connecting to your Azure resources can negotiate TLS v1.2 or greater. For remote management, use SSH (for Linux) or RDP/TLS (for Windows) instead of an unencrypted protocol. Obsolete SSL, TLS, SSH versions and protocols, and weak ciphers should be disabled.

By default, Azure provides encryption for data in transit between Azure data centers.

Responsibility: Customer

Azure Security Center monitoring: None

DP-5: Encrypt sensitive data at rest

Guidance: To complement access controls, Data Factory encrypts data at rest to protect against 'out of band' attacks (such as accessing underlying storage) using encryption. This helps ensure that attackers cannot easily read or modify the data.

Azure provides encryption for data at rest by default. For highly sensitive data, you have options to implement additional encryption at rest on all Azure resources where available. Azure manages your encryption keys by default, but Azure also provides options to manage your own keys (customer-managed keys) for certain Azure services to meet regulatory requirements.

Responsibility: Customer

Azure Security Center monitoring: None

Asset Management

For more information, see the Azure Security Benchmark: Asset Management.

AM-1: Ensure security team has visibility into risks for assets

Guidance: Ensure security teams are granted Security Reader permissions in your Azure tenant and subscriptions so they can monitor for security risks using Azure Security Center.

Depending on how security team responsibilities are structured, monitoring for security risks could be the responsibility of a central security team or a local team. That said, security insights and risks must always be aggregated centrally within an organization.

Security Reader permissions can be applied broadly to an entire tenant (Root Management Group) or scoped to management groups or specific subscriptions.

Note: Additional permissions might be required to get visibility into workloads and services.

Responsibility: Customer

Azure Security Center monitoring: None

AM-2: Ensure security team has access to asset inventory and metadata

Guidance: Ensure that security teams have access to a continuously updated inventory of assets on Azure, like Data Factory. Security teams often need this inventory to evaluate their organization's potential exposure to emerging risks, and as an input to continuous security improvements. Create an Azure Active Directory (Azure AD) group to contain your organization's authorized security team and assign them read access to all Data Factory resources, which can be simplified by a single high-level role assignment within your subscription.

Apply tags to your Azure resources, resource groups, and subscriptions to logically organize them into a taxonomy. Each tag consists of a name and a value pair. For example, you can apply the name "Environment" and the value "Production" to all the resources in production.
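For illustration, the following minimal sketch (Azure SDK for Python; the resource group name, location, and tag values are placeholders) applies an Environment tag to a resource group that contains Data Factory resources so security teams can filter inventory by taxonomy:

    # Minimal sketch: tag a resource group that holds Data Factory resources.
    # Names, location, and tag values are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resource import ResourceManagementClient

    subscription_id = "<subscription-id>"
    resource_client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

    resource_client.resource_groups.create_or_update(
        "<resource-group>",
        {
            "location": "westeurope",
            "tags": {"Environment": "Production", "Owner": "data-platform"},
        },
    )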

Use Azure Virtual Machine Inventory to automate the collection of information about software on Virtual Machines. Software Name, Version, Publisher, and Refresh Time are available from the Azure portal. To get access to install dates and other information, enable guest-level diagnostics and bring the Windows Event Logs into a Log Analytics Workspace.

Data Factory does not allow running an application or the installation of software on its resources.

Responsibility: Customer

Azure Security Center monitoring: None

AM-3: Use only approved Azure services

Guidance: Use Azure Policy to audit and restrict which services users can provision in your environment. Use Azure Resource Graph to query for and discover resources within their subscriptions. You can also use Azure Monitor to create rules to trigger alerts when a non-approved service is detected.

Responsibility: Customer

Azure Security Center monitoring: None

AM-4: Ensure security of asset lifecycle management

Guidance: Establish or update security policies that address asset lifecycle management processes for potentially high impact modifications. These modifications include but are not limited to: identity providers and access, data sensitivity, network configuration, and administrative privilege assignment.

Remove Azure resources when they are no longer needed.

Responsibility: Customer

Azure Security Center monitoring: None

AM-5: Limit users' ability to interact with Azure Resource Manager

Guidance: Use Azure AD Conditional Access to limit users' ability to interact with Azure Resource Manager by configuring "Block access" for the "Microsoft Azure Management" app.

Responsibility: Customer

Azure Security Center monitoring: None

AM-6: Use only approved applications in compute resources

Guidance: If you are running your Self-hosted Integration Runtime in an Azure Virtual Machine, Azure Automation provides complete control during deployment, operations, and decommissioning of workloads and resources. You may use Change Tracking to identify all software installed on Virtual Machines. You can implement your own process or use Azure Automation State Configuration for removing unauthorized software.

Note that this only applies if your Self-hosted Integration Runtime is running in an Azure Virtual Machine.

Responsibility: Customer

Azure Security Center monitoring: None

Logging and Threat Detection

For more information, see the Azure Security Benchmark: Logging and Threat Detection.

LT-1: Enable threat detection for Azure resources

Guidance: Use the Azure Security Center built-in threat detection capability and enable Azure Defender for your Data Factory resources. Azure Defender for Data Factory provides an additional layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit your Data Factory resources.

Forward any logs from Data Factory to your SIEM, which can be used to set up custom threat detections. Ensure that you are monitoring different types of Azure assets for potential threats and anomalies. Focus on getting high-quality alerts to reduce false positives for analysts to sort through. Alerts can be sourced from log data, agents, or other data.

Responsibility: Customer

Azure Security Center monitoring: None

LT-2: Enable threat detection for Azure identity and access management

Guidance: Azure Active Directory (Azure AD) provides the following user logs, which can be viewed in Azure AD reporting or integrated with Azure Monitor, Azure Sentinel, or other SIEM/monitoring tools for more sophisticated monitoring and analytics use cases:

  • Sign-ins - The sign-ins report provides information about the usage of managed applications and user sign-in activities.
  • Audit logs - Provides traceability through logs for all changes done by various features within Azure AD. Examples of audit logs include changes made to any resources within Azure AD, like adding or removing users, apps, groups, roles, and policies.
  • Risky sign-ins - A risky sign-in is an indicator for a sign-in attempt that might have been performed by someone who is not the legitimate owner of a user account.
  • Users flagged for risk - A risky user is an indicator for a user account that might have been compromised.

Azure Security Center can also trigger alerts on certain suspicious activities, such as an excessive number of failed authentication attempts or deprecated accounts in the subscription. In addition to the basic security hygiene monitoring, Azure Security Center's Threat Protection module can also collect more in-depth security alerts from individual Azure compute resources (virtual machines, containers, app service), data resources (SQL DB and storage), and Azure service layers. This capability allows you to have visibility on account anomalies inside individual resources.

Responsibility: Customer

Azure Security Center monitoring: None

LT-3: Enable logging for Azure network activities

Guidance: Enable and collect network security group (NSG) resource logs, NSG flow logs, Azure Firewall logs, and Web Application Firewall (WAF) logs for security analysis to support incident investigations, threat hunting, and security alert generation. You can send the flow logs to an Azure Monitor Log Analytics workspace and then use Traffic Analytics to provide insights.

Data Factory does not produce or process DNS query logs.

Responsibility: Customer

Azure Security Center monitoring: None

LT-4: Enable logging for Azure resources

Guidance: Activity logs, which are automatically available, contain all write operations (PUT, POST, DELETE) for your Data Factory resources except read operations (GET). Activity logs can be used to find an error when troubleshooting or to monitor how a user in your organization modified a resource.

Enable Azure resource logs for Data Factory. You can use Azure Security Center and Azure Policy to enable resource logs and log data collecting. These logs can be critical for investigating security incidents and performing forensic exercises.
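For illustration, the following minimal sketch (Azure SDK for Python; the resource IDs and the setting name are placeholders, and the log categories shown are common Data Factory categories) creates a diagnostic setting that sends Data Factory logs and metrics to a Log Analytics workspace:

    # Minimal sketch: route Data Factory resource logs and metrics to a
    # Log Analytics workspace. Resource IDs and names are placeholders.
    # Retention for a Log Analytics destination is governed by the workspace's
    # retention setting (see LT-6).
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.monitor import MonitorManagementClient
    from azure.mgmt.monitor.models import (
        DiagnosticSettingsResource,
        LogSettings,
        MetricSettings,
    )

    subscription_id = "<subscription-id>"
    factory_resource_id = (
        "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
        "/providers/Microsoft.DataFactory/factories/<data-factory-name>"
    )
    workspace_id = (
        "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
        "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
    )

    monitor_client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

    settings = DiagnosticSettingsResource(
        workspace_id=workspace_id,
        logs=[
            LogSettings(category=c, enabled=True)
            for c in ("PipelineRuns", "TriggerRuns", "ActivityRuns")
        ],
        metrics=[MetricSettings(category="AllMetrics", enabled=True)],
    )

    monitor_client.diagnostic_settings.create_or_update(
        resource_uri=factory_resource_id,
        name="send-to-log-analytics",
        parameters=settings,
    )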

Responsibility: Customer

Azure Security Center monitoring: None

LT-5: Centralize security log management and analysis

Guidance: Ingest logs via Azure Monitor to aggregate security data generated by Azure Data Factory. Within Azure Monitor, you are able to query the Log Analytics workspace that is configured to receive your Azure Data Factory activity logs. Use Azure Storage Accounts for long-term/archival log storage or event hubs for exporting data to other systems.

Alternatively, you may enable and onboard data to Azure Sentinel or a third-party security information and event management (SIEM) solution. You can also integrate Azure Data Factory with Git to leverage several source control benefits, such as the ability to track and audit changes and to revert changes that introduce bugs.

Responsibility: Customer

Azure Security Center monitoring: None

LT-6: Configure log storage retention

Guidance: Enable diagnostic settings for Azure Data Factory. If choosing to store logs in a Log Analytics Workspace, set your Log Analytics Workspace retention period according to your organization's compliance regulations. Use Azure Storage Accounts for long-term/archival storage.

Responsibility: Customer

Azure Security Center monitoring: None

LT-7: Use approved time synchronization sources

Guidance: Not applicable; Data Factory does not support configuring your own time synchronization sources.

The Data Factory service relies on Microsoft time synchronization sources, which are not exposed to customers for configuration.

Responsibility: Microsoft

Azure Security Center monitoring: None

Incident Response

For more information, see the Azure Security Benchmark: Incident Response.

IR-1: Preparation – update incident response process for Azure

Guidance: Ensure your organization has processes to respond to security incidents, has updated these processes for Azure, and is regularly exercising them to ensure readiness.

Responsibility: Customer

Azure Security Center monitoring: None

IR-2: Preparation – setup incident notification

Guidance: Set up security incident contact information in Azure Security Center. This contact information is used by Microsoft to contact you if the Microsoft Security Response Center (MSRC) discovers that your data has been accessed by an unlawful or unauthorized party. You also have options to customize incident alert and notification in different Azure services based on your incident response needs.

Responsibility: Customer

Azure Security Center monitoring: None

IR-3: Detection and analysis – create incidents based on high quality alerts

Guidance: Ensure you have a process to create high-quality alerts and measure the quality of alerts. This allows you to learn lessons from past incidents and prioritize alerts for analysts, so they don't waste time on false positives.

High-quality alerts can be built based on experience from past incidents, validated community sources, and tools designed to generate and clean up alerts by fusing and correlating diverse signal sources.

Azure Security Center provides high-quality alerts across many Azure assets. You can use the ASC data connector to stream the alerts to Azure Sentinel. Azure Sentinel lets you create advanced alert rules to generate incidents automatically for an investigation.

Export your Azure Security Center alerts and recommendations using the export feature to help identify risks to Azure resources. Export alerts and recommendations either manually or in an ongoing, continuous fashion.

Responsibility: Customer

Azure Security Center monitoring: None

IR-4: Detection and analysis – investigate an incident

Guidance: Ensure analysts can query and use diverse data sources as they investigate potential incidents, to build a full view of what happened. Diverse logs should be collected to track the activities of a potential attacker across the kill chain to avoid blind spots. You should also ensure insights and learnings are captured for other analysts and for future historical reference.

The data sources for investigation include the centralized logging sources that are already being collected from the in-scope services and running systems, but can also include:

  • Network data - use network security groups' flow logs, Azure Network Watcher, and Azure Monitor to capture network flow logs and other analytics information.

  • Snapshots of running systems:

    • Use Azure virtual machine's snapshot capability to create a snapshot of the running system's disk.

    • Use the operating system's native memory dump capability to create a snapshot of the running system's memory.

    • Use the snapshot feature of the Azure services or your software's own capability to create snapshots of the running systems.

Azure Sentinel provides extensive data analytics across virtually any log source and a case management portal to manage the full lifecycle of incidents. Intelligence information during an investigation can be associated with an incident for tracking and reporting purposes.

Responsibility: Customer

Azure Security Center monitoring: None

IR-5: Detection and analysis – prioritize incidents

Guidance: Provide context to analysts on which incidents to focus on first based on alert severity and asset sensitivity.

Azure Security Center assigns a severity to each alert to help you prioritize which alerts should be investigated first. The severity is based on how confident Security Center is in the finding or the analytics used to issue the alert, as well as the confidence level that there was malicious intent behind the activity that led to the alert.

Additionally, mark resources using tags and create a naming system to identify and categorize Azure resources, especially those processing sensitive data. It is your responsibility to prioritize the remediation of alerts based on the criticality of the Azure resources and environment where the incident occurred.

Responsibility: Customer

Azure Security Center monitoring: None

IR-6: Containment, eradication and recovery – automate the incident handling

Guidance: Automate manual repetitive tasks to speed up response time and reduce the burden on analysts. Manual tasks take longer to execute, slowing each incident and reducing how many incidents an analyst can handle. Manual tasks also increase analyst fatigue, which increases the risk of human error that causes delays, and degrades the ability of analysts to focus effectively on complex tasks.

Use workflow automation features in Azure Security Center and Azure Sentinel to automatically trigger actions or run a playbook to respond to incoming security alerts. The playbook takes actions, such as sending notifications, disabling accounts, and isolating problematic networks.

Responsibility: Customer

Azure Security Center monitoring: None

Posture and Vulnerability Management

For more information, see the Azure Security Benchmark: Posture and Vulnerability Management.

PV-1: Establish secure configurations for Azure services

Guidance: Define and implement standard security configurations for Azure Data Factory with Azure Policy. Use Azure Policy aliases in the "Microsoft.DataFactory" namespace to create custom policies to audit or enforce the configuration of your Azure Data Factory instances.
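For illustration, the following minimal sketch builds a custom policy rule (as JSON in Python) that audits data factories with public network access enabled. The "Microsoft.DataFactory/factories/publicNetworkAccess" alias is illustrative; confirm the aliases available in the Microsoft.DataFactory namespace before assigning the policy.

    # Minimal sketch: a custom Azure Policy rule that audits data factories
    # whose public network access is not disabled. The alias shown is an
    # assumption to be verified against the Microsoft.DataFactory namespace.
    import json

    policy_definition = {
        "mode": "Indexed",
        "displayName": "Audit Data Factory public network access (custom)",
        "policyRule": {
            "if": {
                "allOf": [
                    {"field": "type", "equals": "Microsoft.DataFactory/factories"},
                    {
                        "field": "Microsoft.DataFactory/factories/publicNetworkAccess",
                        "notEquals": "Disabled",
                    },
                ]
            },
            "then": {"effect": "audit"},
        },
    }

    print(json.dumps(policy_definition, indent=2))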

Responsibility: Customer

Azure Security Center monitoring: None

PV-2: Sustain secure configurations for Azure services

Guidance: Use Azure Policy [deny] and [deploy if not exist] to enforce security settings across your Azure resources.

Responsibility: Customer

Azure Security Center monitoring: None

PV-3: Establish secure configurations for compute resources

Guidance: Use Azure Security Center and Azure Policy to establish secure configurations on all self-hosted integration runtimes running on Azure virtual machines and containers.

Responsibility: Customer

Azure Security Center monitoring: None

PV-4: Sustain secure configurations for compute resources

Guidance: If you are running your Self-hosted Integration Runtime in an Azure Virtual Machine (VM), note that there are several options for maintaining a secure configuration for VMs for deployment:

  • Azure Resource Manager templates: These are JSON-based files used to deploy a VM from the Azure portal, and a custom template will need to be maintained. Microsoft performs the maintenance on the base templates.
  • Custom virtual hard disk (VHD): In some circumstances, it might be necessary to use custom VHD files, such as when dealing with complex environments that cannot be managed through other means.
  • Azure Automation State Configuration: Once the base OS is deployed, this can be used for more granular control of the settings, and enforced through the automation framework.

For most scenarios, the Microsoft base VM templates combined with the Azure Automation Desired State Configuration can assist in meeting and maintaining the security requirements.

Responsibility: Customer

Azure Security Center monitoring: None

PV-6: Perform software vulnerability assessments

Guidance: Not applicable; Microsoft performs vulnerability management on the underlying systems that support Data Factory.

Responsibility: Microsoft

Azure Security Center monitoring: None

PV-7: Rapidly and automatically remediate software vulnerabilities

Guidance: If you are running your Self-hosted Integration Runtime in an Azure Virtual Machine (VM), use the Azure Update Management solution to manage updates and patches for your VMs. Update Management relies on the locally configured update repository to patch supported Windows systems. Tools like System Center Updates Publisher (Updates Publisher) allow you to publish custom updates into Windows Server Update Services (WSUS). This scenario allows Update Management to patch machines that use Configuration Manager as their update repository with third-party software.

If you are running your Self-hosted Integration Runtime in an Azure Virtual Machine, you can use the native vulnerability scanner. The vulnerability scanner included with Azure Security Center is powered by Qualys. Qualys's scanner is the leading tool for real-time identification of vulnerabilities in your Azure Virtual Machines.

When Security Center identifies vulnerabilities, it presents findings and related information as recommendations. The related information includes remediation steps, related CVEs, CVSS scores, and more. You can view the identified vulnerabilities for one or more subscriptions, or for a specific virtual machine.

Responsibility: Customer

Azure Security Center monitoring: None

PV-8: Conduct regular attack simulation

Guidance: As required, conduct penetration testing or red team activities on your Azure resources and ensure remediation of all critical security findings.

Follow the Microsoft Cloud Penetration Testing Rules of Engagement to ensure your penetration tests are not in violation of Microsoft policies. Use Microsoft's strategy and execution of Red Teaming and live site penetration testing against Microsoft-managed cloud infrastructure, services, and applications.

Responsibility: Customer

Azure Security Center monitoring: None

Endpoint Security

For more information, see the Azure Security Benchmark: Endpoint Security.

ES-2: Use centrally managed modern anti-malware software

Guidance: If you are running your Self-hosted Integration Runtime in an Azure Virtual Machine, you can use Microsoft Antimalware for Azure Windows Virtual Machines to continuously monitor and defend your resources.

Responsibility: Customer

Azure Security Center monitoring: None

ES-3: Ensure anti-malware software and signatures are updated

Guidance: When Self-hosted integration runtime is deployed in Azure VM, Microsoft Antimalware for Azure will automatically install the latest signature, platform, and engine updates by default. Follow recommendations in Azure Security Center: "Compute & Apps" to ensure all endpoints are up to date with the latest signatures. The Windows OS can be further protected with additional security to limit the risk of virus or malware-based attacks with the Microsoft Defender Advanced Threat Protection service that integrates with Azure Security Center.

Responsibility: Customer

Azure Security Center monitoring: None

Backup and Recovery

For more information, see the Azure Security Benchmark: Backup and Recovery.

BR-1: Ensure regular automated backups

Guidance: If you are running self-hosted integration runtimes on virtual machines, enable Azure Backup and configure the VMs, as well as the desired frequency and retention period, for automatic backups. To back up all code in Azure Data Factory, use the source control functionality in Data Factory.
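For illustration, the following minimal sketch connects a data factory to a GitHub repository so that pipeline, dataset, and linked service JSON is version controlled. It assumes the azure-mgmt-datafactory configure_factory_repo operation and FactoryGitHubConfiguration model; all names, the location, and the repository details are placeholders.

    # Minimal sketch: configure Git (GitHub) integration for a data factory.
    # Assumes the azure-mgmt-datafactory models shown; names are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient
    from azure.mgmt.datafactory.models import (
        FactoryGitHubConfiguration,
        FactoryRepoUpdate,
    )

    subscription_id = "<subscription-id>"
    factory_resource_id = (
        "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
        "/providers/Microsoft.DataFactory/factories/<data-factory-name>"
    )

    adf_client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

    adf_client.factories.configure_factory_repo(
        "westeurope",   # the factory's location (placeholder)
        FactoryRepoUpdate(
            factory_resource_id=factory_resource_id,
            repo_configuration=FactoryGitHubConfiguration(
                account_name="<github-org-or-user>",
                repository_name="<repo-name>",
                collaboration_branch="main",
                root_folder="/",
            ),
        ),
    )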

Responsibility: Shared

Azure Security Center monitoring: None

BR-2: Encrypt backup data

Guidance: If you are running your Self-Hosted Integration Runtime on Azure Virtual Machines, enable Azure Backup and specify the target Azure VMs, as well as the desired frequency and retention periods. These virtual machines can be backed up using customer-managed keys within Azure Key Vault.

Responsibility: Shared

Azure Security Center monitoring: None

BR-3: Validate all backups including customer-managed keys

Guidance: If you are running Self-Hosted Integration Runtime on Azure Virtual Machines, ensure the ability to periodically perform data restoration of content within Azure Backup. If necessary, test restore content to an isolated network. Periodically test restoration of backed up customer-managed keys.

Responsibility: Shared

Azure Security Center monitoring: None

BR-4: Mitigate risk of lost keys

Guidance: Ensure you have measures in place to prevent and recover from loss of keys used to encrypt Azure Data Factory metadata. Enable soft delete and purge protection in Azure Key Vault storing the encryption keys for Data Factory to protect keys against accidental or malicious deletion.
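For illustration, the following minimal sketch (Azure SDK for Python; resource group and vault names are placeholders) enables soft delete and purge protection on the key vault that holds Data Factory keys or credentials. It assumes the azure-mgmt-keyvault VaultPatchParameters model; note that purge protection cannot be disabled once enabled, and soft delete is enabled by default on newer vaults.

    # Minimal sketch: enable soft delete and purge protection on a key vault.
    # Names are placeholders; purge protection is irreversible once set.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.keyvault import KeyVaultManagementClient
    from azure.mgmt.keyvault.models import VaultPatchParameters, VaultPatchProperties

    subscription_id = "<subscription-id>"
    kv_client = KeyVaultManagementClient(DefaultAzureCredential(), subscription_id)

    kv_client.vaults.update(
        "<resource-group>",
        "<vault-name>",
        VaultPatchParameters(
            properties=VaultPatchProperties(
                enable_soft_delete=True,
                enable_purge_protection=True,
            )
        ),
    )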

Responsibility: Shared

Azure Security Center monitoring: None

Governance and Strategy

For more information, see the Azure Security Benchmark: Governance and Strategy.

GS-1: Define asset management and data protection strategy

Guidance: Ensure you document and communicate a clear strategy for continuous monitoring and protection of systems and data. Prioritize discovery, assessment, protection, and monitoring of business-critical data and systems.

This strategy should include documented guidance, policy, and standards for the following elements:

  • Data classification standard in accordance with the business risks
  • Security organization visibility into risks and asset inventory
  • Security organization approval of Azure services for use
  • Security of assets through their lifecycle
  • Required access control strategy in accordance with organizational data classification
  • Use of Azure native and third-party data protection capabilities
  • Data encryption requirements for in-transit and at-rest use cases
  • Appropriate cryptographic standards

Responsibility: Customer

Azure Security Center monitoring: None

GS-2: Define enterprise segmentation strategy

Guidance: Establish an enterprise-wide strategy for segmenting access to assets using a combination of identity, network, application, subscription, management group, and other controls.

Carefully balance the need for security separation with the need to enable daily operation of the systems that need to communicate with each other and access data.

Ensure that the segmentation strategy is implemented consistently across control types including network security, identity and access models, and application permission/access models, and human process controls.

Responsibility: Customer

Azure Security Center monitoring: None

GS-3: Define security posture management strategy

Guidance: Continuously measure and mitigate risks to your individual assets and the environment they are hosted in. Prioritize high value assets and highly-exposed attack surfaces, such as published applications, network ingress and egress points, user and administrator endpoints, etc.

Responsibility: Customer

Azure Security Center monitoring: None

GS-4: Align organization roles, responsibilities, and accountabilities

Guidance: Ensure that you document and communicate a clear strategy for roles and responsibilities in your security organization. Prioritize providing clear accountability for security decisions, educating everyone on the shared responsibility model, and educating technical teams on the technology used to secure the cloud.

Responsibility: Customer

Azure Security Center monitoring: None

GS-5: Define network security strategy

Guidance: Establish an Azure network security approach as part of your organization's overall security access control strategy.

This strategy should include documented guidance, policy, and standards for the following elements:

  • Centralized network management and security responsibility
  • Virtual network segmentation model aligned with the enterprise segmentation strategy
  • Remediation strategy in different threat and attack scenarios
  • Internet edge and ingress and egress strategy
  • Hybrid cloud and on-premises interconnectivity strategy
  • Up-to-date network security artifacts (such as network diagrams, reference network architecture)

Responsibility: Customer

Azure Security Center monitoring: None

GS-6: Define identity and privileged access strategy

Guidance: Establish an Azure identity and privileged access approach as part of your organization's overall security access control strategy.

This strategy should include documented guidance, policy, and standards for the following elements:

  • A centralized identity and authentication system and its interconnectivity with other internal and external identity systems
  • Strong authentication methods in different use cases and conditions
  • Protection of highly privileged users
  • Monitoring and handling of anomalous user activities
  • User identity and access review and reconciliation process

Responsibility: Customer

Azure Security Center monitoring: None

GS-7: Define logging and threat response strategy

Guidance: Establish a logging and threat response strategy to rapidly detect and remediate threats while meeting compliance requirements. Prioritize providing analysts with high-quality alerts and seamless experiences so that they can focus on threats rather than integration and manual steps.

This strategy should include documented guidance, policy, and standards for the following elements:

  • The security operations (SecOps) organization's role and responsibilities
  • A well-defined incident response process aligning with NIST or another industry framework
  • Log capture and retention to support threat detection, incident response, and compliance needs
  • Centralized visibility of, and correlation of information about, threats, using SIEM, native Azure capabilities, and other sources
  • Communication and notification plan with your customers, suppliers, and public parties of interest
  • Use of Azure native and third-party platforms for incident handling, such as logging and threat detection, forensics, and attack remediation and eradication
  • Processes for handling incidents and post-incident activities, such as lessons learned and evidence retention

Responsibility: Customer

Azure Security Center monitoring: None

GS-8: Define backup and recovery strategy

Guidance: Establish an Azure backup and recovery strategy for your organization.

This strategy should include documented guidance, policy, and standards for the following elements:

  • Recovery time objective (RTO) and recovery point objective (RPO) definitions in accordance with your business resiliency objectives
  • Redundancy design in your applications and infrastructure setup
  • Protection of backup using access control and data encryption

Responsibility: Customer

Azure Security Center monitoring: None

Next steps