Copy data to or from Azure Data Lake Storage Gen1 using Azure Data Factory

This article outlines how to copy data to and from Azure Data Lake Storage Gen1. To learn about Azure Data Factory, read the introductory article.

Supported capabilities

This Azure Data Lake Storage Gen1 connector is supported for the following activities:

  • Copy activity
  • Mapping data flow
  • Lookup activity
  • GetMetadata activity
  • Delete activity

Specifically, with this connector you can:

  • Copy files by using one of the following methods of authentication: service principal or managed identities for Azure resources.
  • Copy files as is or parse or generate files with the supported file formats and compression codecs.

Important

If you copy data by using the self-hosted integration runtime, configure the corporate firewall to allow outbound traffic to <ADLS account name>.azuredatalakestore.net and login.microsoftonline.com/<tenant>/oauth2/token on port 443. The latter is the Azure Security Token Service that the integration runtime needs to communicate with to get the access token.
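To sanity-check these firewall rules, you can test outbound TCP connectivity from the machine that hosts the self-hosted integration runtime. The following is a minimal Python sketch using only the standard library; the account host name is a placeholder to replace with your own:

import socket

# Placeholder host; substitute your own Data Lake Store account name.
endpoints = [
    ("<ADLS account name>.azuredatalakestore.net", 443),
    ("login.microsoftonline.com", 443),
]

for host, port in endpoints:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"OK   {host}:{port}")
    except OSError as exc:
        print(f"FAIL {host}:{port} -> {exc}")

A successful connection on port 443 only proves that the outbound path is open; it doesn't validate credentials or permissions.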

Get started

Tip

For a walk-through of how to use the Azure Data Lake Store connector, see Load data into Azure Data Lake Store.

You can use one of the following tools or SDKs to run the copy activity with a pipeline. Select a link for step-by-step instructions:

The following sections provide information about properties that are used to define Data Factory entities specific to Azure Data Lake Store.

Linked service properties

The following properties are supported for the Azure Data Lake Store linked service:

Property Description Required
type The type property must be set to AzureDataLakeStore. Yes
dataLakeStoreUri Information about the Azure Data Lake Store account. This information takes one of the following formats: https://[accountname].azuredatalakestore.net/webhdfs/v1 or adl://[accountname].azuredatalakestore.net/. Yes
subscriptionId The Azure subscription ID to which the Data Lake Store account belongs. Required for sink
resourceGroupName The Azure resource group name to which the Data Lake Store account belongs. Required for sink
connectVia The integration runtime to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is located in a private network. If this property isn't specified, the default Azure integration runtime is used. No

Use service principal authentication

To use service principal authentication, follow these steps.

  1. Register an application entity in Azure Active Directory and grant it access to Data Lake Store. For detailed steps, see Service-to-service authentication. Make note of the following values, which you use to define the linked service:

    • Application ID
    • Application key
    • Tenant ID
  2. Grant the service principal proper permission. See examples on how permission works in Data Lake Storage Gen1 from Access control in Azure Data Lake Storage Gen1.

    • As source: In Data explorer > Access, grant at least Execute permission for ALL upstream folders, including the root, along with Read permission for the files to copy. You can choose to apply the permission to This folder and all children for recursive coverage, and to add it as both an access permission and a default permission entry. There's no requirement on account-level access control (IAM).
    • As sink: In Data explorer > Access, grant at least Execute permission for ALL upstream folders, including the root, along with Write permission for the sink folder. You can choose to apply the permission to This folder and all children for recursive coverage, and to add it as both an access permission and a default permission entry. If you use an Azure integration runtime to copy (both source and sink are in the cloud), in IAM, grant at least the Reader role so that Data Factory can detect the region of the Data Lake Store account. If you want to avoid this IAM role, explicitly create an Azure integration runtime with the location of Data Lake Store. For example, if your Data Lake Store is in West Europe, create an Azure integration runtime with the location set to "West Europe." Associate them in the Data Lake Store linked service as shown in the following example.

The following properties are supported:

Property Description Required
servicePrincipalId Specify the application's client ID. Yes
servicePrincipalKey Specify the application's key. Mark this field as a SecureString to store it securely in Data Factory, or reference a secret stored in Azure Key Vault. Yes
tenant Specify the tenant information, such as domain name or tenant ID, under which your application resides. You can retrieve it by hovering the mouse in the upper-right corner of the Azure portal. Yes

Example:

{
    "name": "AzureDataLakeStoreLinkedService",
    "properties": {
        "type": "AzureDataLakeStore",
        "typeProperties": {
            "dataLakeStoreUri": "https://<accountname>.azuredatalakestore.net/webhdfs/v1",
            "servicePrincipalId": "<service principal id>",
            "servicePrincipalKey": {
                "type": "SecureString",
                "value": "<service principal key>"
            },
            "tenant": "<tenant info, e.g. microsoft.onmicrosoft.com>",
            "subscriptionId": "<subscription of ADLS>",
            "resourceGroupName": "<resource group of ADLS>"
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
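For reference, the access token behind service principal authentication comes from the login.microsoftonline.com/<tenant>/oauth2/token endpoint called out in the firewall note earlier. The following Python sketch shows that client-credentials request, assuming the Azure AD v1 endpoint and the Data Lake resource URI https://datalake.azure.net/ (verify both for your environment); the angle-bracket values are placeholders:

import json
import urllib.parse
import urllib.request

tenant = "<tenant id>"
body = urllib.parse.urlencode({
    "grant_type": "client_credentials",
    "client_id": "<service principal id>",
    "client_secret": "<service principal key>",
    # Assumed resource URI for Azure Data Lake; confirm for your cloud.
    "resource": "https://datalake.azure.net/",
}).encode()

request = urllib.request.Request(
    f"https://login.microsoftonline.com/{tenant}/oauth2/token", data=body
)
with urllib.request.urlopen(request) as response:
    token = json.load(response)["access_token"]
print("Got token of length", len(token))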

Use managed identities for Azure resources authentication

A data factory can be associated with a managed identity for Azure resources, which represents this specific data factory. You can directly use this managed identity for Data Lake Store authentication, similar to using your own service principal. It allows this designated factory to access and copy data to or from Data Lake Store.

To use managed identities for Azure resources authentication, follow these steps.

  1. Retrieve the data factory managed identity information by copying the value of the "Service Identity Application ID" generated along with your factory.

  2. Grant the managed identity access to Data Lake Store. See examples on how permission works in Data Lake Storage Gen1 from Access control in Azure Data Lake Storage Gen1.

    • As source: In Data explorer > Access, grant at least Execute permission for ALL upstream folders, including the root, along with Read permission for the files to copy. You can choose to apply the permission to This folder and all children for recursive coverage, and to add it as both an access permission and a default permission entry. There's no requirement on account-level access control (IAM).
    • As sink: In Data explorer > Access, grant at least Execute permission for ALL upstream folders, including the root, along with Write permission for the sink folder. You can choose to apply the permission to This folder and all children for recursive coverage, and to add it as both an access permission and a default permission entry. If you use an Azure integration runtime to copy (both source and sink are in the cloud), in IAM, grant at least the Reader role so that Data Factory can detect the region of the Data Lake Store account. If you want to avoid this IAM role, explicitly create an Azure integration runtime with the location of Data Lake Store. Associate them in the Data Lake Store linked service as shown in the following example.

In Azure Data Factory, you don't need to specify any properties besides the general Data Lake Store information in the linked service.

Example:

{
    "name": "AzureDataLakeStoreLinkedService",
    "properties": {
        "type": "AzureDataLakeStore",
        "typeProperties": {
            "dataLakeStoreUri": "https://<accountname>.azuredatalakestore.net/webhdfs/v1",
            "subscriptionId": "<subscription of ADLS>",
            "resourceGroupName": "<resource group of ADLS>"
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
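Whichever authentication method you configure, dataLakeStoreUri points at the account's WebHDFS-compatible REST endpoint. As a rough illustration of what a call against that endpoint looks like, here is a Python sketch that lists a folder with a bearer token (for example, one obtained with the client-credentials sketch earlier); the account, path, and token are placeholders:

import json
import urllib.request

account = "<accountname>"
path = "myfolder"          # hypothetical folder in the store
token = "<access token>"

request = urllib.request.Request(
    f"https://{account}.azuredatalakestore.net/webhdfs/v1/{path}?op=LISTSTATUS",
    headers={"Authorization": f"Bearer {token}"},
)
with urllib.request.urlopen(request) as response:
    for entry in json.load(response)["FileStatuses"]["FileStatus"]:
        print(entry["type"], entry["pathSuffix"])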

Dataset properties

For a full list of sections and properties available for defining datasets, see the Datasets article.

Azure Data Factory supports the following file formats. Refer to each article for format-based settings.

The following properties are supported for Azure Data Lake Store Gen1 under location settings in the format-based dataset:

Property Description Required
type The type property under location in the dataset must be set to AzureDataLakeStoreLocation. Yes
folderPath The path to a folder. If you want to use a wildcard to filter folders, skip this setting and specify it in activity source settings. No
fileName The file name under the given folderPath. If you want to use a wildcard to filter files, skip this setting and specify it in activity source settings. No

Example:

{
    "name": "DelimitedTextDataset",
    "properties": {
        "type": "DelimitedText",
        "linkedServiceName": {
            "referenceName": "<ADLS Gen1 linked service name>",
            "type": "LinkedServiceReference"
        },
        "schema": [ < physical schema, optional, auto retrieved during authoring > ],
        "typeProperties": {
            "location": {
                "type": "AzureDataLakeStoreLocation",
                "folderPath": "root/folder/subfolder"
            },
            "columnDelimiter": ",",
            "quoteChar": "\"",
            "firstRowAsHeader": true,
            "compressionCodec": "gzip"
        }
    }
}
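To make the dataset settings concrete, this is roughly what they mean for any client reading such a file: gzip-compressed, comma-delimited text whose first row is a header. A minimal Python sketch, with a hypothetical local file sample.csv.gz standing in for a file under root/folder/subfolder:

import csv
import gzip

# sample.csv.gz is a hypothetical stand-in for a file in the store.
with gzip.open("sample.csv.gz", "rt", newline="") as f:
    reader = csv.reader(f, delimiter=",", quotechar='"')  # columnDelimiter, quoteChar
    header = next(reader)                                 # firstRowAsHeader: true
    for row in reader:
        print(dict(zip(header, row)))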

Legacy dataset model

Note

The following dataset model is still supported as-is for backward compatibility. We recommend that you use the new model described in the preceding section going forward; the ADF authoring UI has switched to generating the new model.

Property Description Required
type The type property of the dataset must be set to AzureDataLakeStoreFile. Yes
folderPath Path to the folder in Data Lake Store. If not specified, it points to the root.

Wildcard filter is supported. Allowed wildcards are * (matches zero or more characters) and ? (matches zero or a single character). Use ^ to escape if your actual folder name contains a wildcard or this escape character.

For example: rootfolder/subfolder/. See more examples in Folder and file filter examples.
No
fileName Name or wildcard filter for the files under the specified "folderPath". If you don't specify a value for this property, the dataset points to all files in the folder.

For filters, the allowed wildcards are * (matches zero or more characters) and ? (matches zero or a single character).
- Example 1: "fileName": "*.csv"
- Example 2: "fileName": "???20180427.txt"
Use ^ to escape if your actual file name contains a wildcard or this escape character.

When fileName isn't specified for an output dataset and preserveHierarchy isn't specified in the activity sink, the copy activity automatically generates the file name with the following pattern: "Data.[activity run ID GUID].[GUID if FlattenHierarchy].[format if configured].[compression if configured]", for example, "Data.0a405f8a-93ff-4c6f-b3be-f69616f1df7a.txt.gz". If you copy from a tabular source by using a table name instead of a query, the name pattern is "[table name].[format].[compression if configured]", for example, "MyTable.csv".
No
modifiedDatetimeStart Files filter based on the attribute Last Modified. The files are selected if their last modified time is within the time range between modifiedDatetimeStart and modifiedDatetimeEnd. The time is applied to the UTC time zone in the format of "2018-12-01T05:00:00Z".

Enabling this setting affects the overall performance of data movement when you filter huge numbers of files.

The properties can be NULL, which means no file attribute filter is applied to the dataset. When modifiedDatetimeStart has a datetime value but modifiedDatetimeEnd is NULL, it means the files whose last modified attribute is greater than or equal to the datetime value are selected. When modifiedDatetimeEnd has a datetime value but modifiedDatetimeStart is NULL, it means the files whose last modified attribute is less than the datetime value are selected.
No
modifiedDatetimeEnd Files filter based on the attribute Last Modified. The files are selected if their last modified time is within the time range between modifiedDatetimeStart and modifiedDatetimeEnd. The time is applied to the UTC time zone in the format of "2018-12-01T05:00:00Z".

Enabling this setting affects the overall performance of data movement when you filter huge numbers of files.

The properties can be NULL, which means no file attribute filter is applied to the dataset. When modifiedDatetimeStart has a datetime value but modifiedDatetimeEnd is NULL, it means the files whose last modified attribute is greater than or equal to the datetime value are selected. When modifiedDatetimeEnd has a datetime value but modifiedDatetimeStart is NULL, it means the files whose last modified attribute is less than the datetime value are selected.
No
format If you want to copy files as is between file-based stores (binary copy), skip the format section in both input and output dataset definitions.

If you want to parse or generate files with a specific format, the following file format types are supported: TextFormat, JsonFormat, AvroFormat, OrcFormat, and ParquetFormat. Set the type property under format to one of these values. For more information, see the Text format, JSON format, Avro format, Orc format, and Parquet format sections.
No (only for binary copy scenario)
compression Specify the type and level of compression for the data. For more information, see Supported file formats and compression codecs.
Supported types are GZip, Deflate, BZip2, and ZipDeflate.
Supported levels are Optimal and Fastest.
No
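The modifiedDatetimeStart and modifiedDatetimeEnd rules above amount to a half-open UTC window: a file is selected when start <= lastModified < end, and a NULL bound leaves that side open. A minimal Python sketch of that selection logic:

from datetime import datetime, timezone

def is_selected(last_modified, start=None, end=None):
    # NULL (None) start or end means no bound on that side.
    if start is not None and last_modified < start:
        return False
    if end is not None and last_modified >= end:
        return False
    return True

start = datetime(2018, 12, 1, 5, 0, tzinfo=timezone.utc)
end = datetime(2018, 12, 1, 6, 0, tzinfo=timezone.utc)
print(is_selected(datetime(2018, 12, 1, 5, 30, tzinfo=timezone.utc), start, end))  # True
print(is_selected(end, start, end))  # False: the end bound is exclusive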

Tip

To copy all files under a folder, specify folderPath only.
To copy a single file with a particular name, specify folderPath with a folder part and fileName with a file name.
To copy a subset of files under a folder, specify folderPath with a folder part and fileName with a wildcard filter.

Example:

{
    "name": "ADLSDataset",
    "properties": {
        "type": "AzureDataLakeStoreFile",
        "linkedServiceName":{
            "referenceName": "<ADLS linked service name>",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "folderPath": "datalake/myfolder/",
            "fileName": "*",
            "modifiedDatetimeStart": "2018-12-01T05:00:00Z",
            "modifiedDatetimeEnd": "2018-12-01T06:00:00Z",
            "format": {
                "type": "TextFormat",
                "columnDelimiter": ",",
                "rowDelimiter": "\n"
            },
            "compression": {
                "type": "GZip",
                "level": "Optimal"
            }
        }
    }
}

Copy activity properties

For a full list of sections and properties available for defining activities, see Pipelines. This section provides a list of properties supported by Azure Data Lake Store source and sink.

Azure Data Lake Store as source

Azure Data Factory supports the following file formats. Refer to each article for format-based settings.

The following properties are supported for Azure Data Lake Store Gen1 under storeSettings settings in the format-based copy source:

Property Description Required
type The type property under storeSettings must be set to AzureDataLakeStoreReadSetting. Yes
recursive Indicates whether the data is read recursively from the subfolders or only from the specified folder. When recursive is set to true and the sink is a file-based store, an empty folder or subfolder isn't copied or created at the sink. Allowed values are true (default) and false. No
wildcardFolderPath The folder path with wildcard characters to filter source folders.
Allowed wildcards are * (matches zero or more characters) and ? (matches zero or a single character). Use ^ to escape if your actual folder name contains a wildcard or this escape character.
See more examples in Folder and file filter examples.
No
wildcardFileName The file name with wildcard characters under the given folderPath/wildcardFolderPath to filter source files.
Allowed wildcards are * (matches zero or more characters) and ? (matches zero or a single character). Use ^ to escape if your actual file name contains a wildcard or this escape character. See more examples in Folder and file filter examples.
Yes if fileName isn't specified in dataset
modifiedDatetimeStart Files filter based on the attribute Last Modified. The files are selected if their last modified time is within the time range between modifiedDatetimeStart and modifiedDatetimeEnd. The time is applied to the UTC time zone in the format of "2018-12-01T05:00:00Z".
The properties can be NULL, which means no file attribute filter is applied to the dataset. When modifiedDatetimeStart has a datetime value but modifiedDatetimeEnd is NULL, it means the files whose last modified attribute is greater than or equal to the datetime value are selected. When modifiedDatetimeEnd has a datetime value but modifiedDatetimeStart is NULL, it means the files whose last modified attribute is less than the datetime value are selected.
No
modifiedDatetimeEnd Same as above. No
maxConcurrentConnections The number of connections to connect to the data store concurrently. Specify only when you want to limit the concurrent connection to the data store. No

Example:

"activities":[
    {
        "name": "CopyFromADLSGen1",
        "type": "Copy",
        "inputs": [
            {
                "referenceName": "<Delimited text input dataset name>",
                "type": "DatasetReference"
            }
        ],
        "outputs": [
            {
                "referenceName": "<output dataset name>",
                "type": "DatasetReference"
            }
        ],
        "typeProperties": {
            "source": {
                "type": "DelimitedTextSource",
                "formatSettings":{
                    "type": "DelimitedTextReadSetting",
                    "skipLineCount": 10
                },
                "storeSettings":{
                    "type": "AzureDataLakeStoreReadSetting",
                    "recursive": true,
                    "wildcardFolderPath": "myfolder*A",
                    "wildcardFileName": "*.csv"
                }
            },
            "sink": {
                "type": "<sink type>"
            }
        }
    }
]

Legacy source model

Note

The following copy source model is still supported as-is for backward compatibility. We recommend that you use the new model described above going forward; the ADF authoring UI has switched to generating the new model.

Property Description Required
type The type property of the copy activity source must be set to AzureDataLakeStoreSource. Yes
recursive Indicates whether the data is read recursively from the subfolders or only from the specified folder. When recursive is set to true and the sink is a file-based store, an empty folder or subfolder isn't copied or created at the sink. Allowed values are true (default) and false. No
maxConcurrentConnections The number of connections to connect to the data store concurrently. Specify only when you want to limit the concurrent connection to the data store. No

Example:

"activities":[
    {
        "name": "CopyFromADLSGen1",
        "type": "Copy",
        "inputs": [
            {
                "referenceName": "<ADLS Gen1 input dataset name>",
                "type": "DatasetReference"
            }
        ],
        "outputs": [
            {
                "referenceName": "<output dataset name>",
                "type": "DatasetReference"
            }
        ],
        "typeProperties": {
            "source": {
                "type": "AzureDataLakeStoreSource",
                "recursive": true
            },
            "sink": {
                "type": "<sink type>"
            }
        }
    }
]

Azure Data Lake Store as sink

Azure Data Factory supports the following file formats. Refer to each article for format-based settings.

The following properties are supported for Azure Data Lake Store Gen1 under storeSettings settings in the format-based copy sink:

Property Description Required
type The type property under storeSettings must be set to AzureDataLakeStoreWriteSetting. Yes
copyBehavior Defines the copy behavior when the source is files from a file-based data store.

Allowed values are:
- PreserveHierarchy (default): Preserves the file hierarchy in the target folder. The relative path of the source file to the source folder is identical to the relative path of the target file to the target folder.
- FlattenHierarchy: All files from the source folder are in the first level of the target folder. The target files have autogenerated names.
- MergeFiles: Merges all files from the source folder to one file. If the file name is specified, the merged file name is the specified name. Otherwise, it's an autogenerated file name.
No
maxConcurrentConnections The number of connections to connect to the data store concurrently. Specify only when you want to limit the concurrent connection to the data store. No

Example:

"activities":[
    {
        "name": "CopyToADLSGen1",
        "type": "Copy",
        "inputs": [
            {
                "referenceName": "<input dataset name>",
                "type": "DatasetReference"
            }
        ],
        "outputs": [
            {
                "referenceName": "<Parquet output dataset name>",
                "type": "DatasetReference"
            }
        ],
        "typeProperties": {
            "source": {
                "type": "<source type>"
            },
            "sink": {
                "type": "ParquetSink",
                "storeSettings":{
                    "type": "AzureDataLakeStoreWriteSetting",
                    "copyBehavior": "PreserveHierarchy"
                }
            }
        }
    }
]

Legacy sink model

Note

The following copy sink model is still supported as-is for backward compatibility. We recommend that you use the new model described above going forward; the ADF authoring UI has switched to generating the new model.

Property Description Required
type The type property of the copy activity sink must be set to AzureDataLakeStoreSink. Yes
copyBehavior Defines the copy behavior when the source is files from a file-based data store.

Allowed values are:
- PreserveHierarchy (default): Preserves the file hierarchy in the target folder. The relative path of the source file to the source folder is identical to the relative path of the target file to the target folder.
- FlattenHierarchy: All files from the source folder are in the first level of the target folder. The target files have autogenerated names.
- MergeFiles: Merges all files from the source folder to one file. If the file name is specified, the merged file name is the specified name. Otherwise, the file name is autogenerated.
No
maxConcurrentConnections The number of connections to connect to the data store concurrently. Specify only when you want to limit the concurrent connection to the data store. No

Example:

"activities":[
    {
        "name": "CopyToADLSGen1",
        "type": "Copy",
        "inputs": [
            {
                "referenceName": "<input dataset name>",
                "type": "DatasetReference"
            }
        ],
        "outputs": [
            {
                "referenceName": "<ADLS Gen1 output dataset name>",
                "type": "DatasetReference"
            }
        ],
        "typeProperties": {
            "source": {
                "type": "<source type>"
            },
            "sink": {
                "type": "AzureDataLakeStoreSink",
                "copyBehavior": "PreserveHierarchy"
            }
        }
    }
]

Folder and file filter examples

This section describes the resulting behavior of the folder path and file name with wildcard filters.

folderPath fileName recursive Source folder structure and filter result (files marked "(retrieved)" are copied)
Folder* (Empty, use default) false FolderA
    File1.csv (retrieved)
    File2.json (retrieved)
    Subfolder1
        File3.csv
        File4.json
        File5.csv
AnotherFolderB
    File6.csv
Folder* (Empty, use default) true FolderA
    File1.csv (retrieved)
    File2.json (retrieved)
    Subfolder1
        File3.csv (retrieved)
        File4.json (retrieved)
        File5.csv (retrieved)
AnotherFolderB
    File6.csv
Folder* *.csv false FolderA
    File1.csv (retrieved)
    File2.json
    Subfolder1
        File3.csv
        File4.json
        File5.csv
AnotherFolderB
    File6.csv
Folder* *.csv true FolderA
    File1.csv (retrieved)
    File2.json
    Subfolder1
        File3.csv (retrieved)
        File4.json
        File5.csv (retrieved)
AnotherFolderB
    File6.csv
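The results in the table can be reproduced with a small matcher that implements the documented rules: * matches zero or more characters, ? matches zero or a single character, and ^ escapes the next character. A minimal Python sketch:

import re

def wildcard_to_regex(pattern):
    # Translate the documented wildcard syntax into an anchored regex.
    out, i = [], 0
    while i < len(pattern):
        ch = pattern[i]
        if ch == "^" and i + 1 < len(pattern):  # ^ escapes the next character
            out.append(re.escape(pattern[i + 1]))
            i += 2
            continue
        out.append({"*": ".*", "?": ".?"}.get(ch, re.escape(ch)))
        i += 1
    return re.compile("".join(out) + r"\Z")

folders = ["FolderA", "AnotherFolderB"]
print([f for f in folders if wildcard_to_regex("Folder*").match(f)])   # ['FolderA']
files = ["File1.csv", "File2.json", "File3.csv"]
print([f for f in files if wildcard_to_regex("*.csv").match(f)])       # ['File1.csv', 'File3.csv']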

Examples of behavior of the copy operation

This section describes the resulting behavior of the copy operation for different combinations of recursive and copyBehavior values.

recursive copyBehavior Source folder structure Resulting target
true preserveHierarchy Folder1
    File1
    File2
    Subfolder1
        File3
        File4
        File5
The target Folder1 is created with the same structure as the source:

Folder1
    File1
    File2
    Subfolder1
        File3
        File4
        File5
true flattenHierarchy Folder1
    File1
    File2
    Subfolder1
        File3
        File4
        File5
The target Folder1 is created with the following structure:

Folder1
    autogenerated name for File1
    autogenerated name for File2
    autogenerated name for File3
    autogenerated name for File4
    autogenerated name for File5
true mergeFiles Folder1
    File1
    File2
    Subfolder1
        File3
        File4
        File5
The target Folder1 is created with the following structure:

Folder1
    File1 + File2 + File3 + File4 + File5 contents are merged into one file, with an autogenerated file name.
false preserveHierarchy Folder1
    File1
    File2
    Subfolder1
        File3
        File4
        File5
The target Folder1 is created with the following structure:

Folder1
    File1
    File2

Subfolder1, along with File3, File4, and File5, isn't picked up.
false flattenHierarchy Folder1
    File1
    File2
    Subfolder1
        File3
        File4
        File5
The target Folder1 is created with the following structure:

Folder1
    autogenerated name for File1
    autogenerated name for File2

Subfolder1, along with File3, File4, and File5, isn't picked up.
false mergeFiles Folder1
    File1
    File2
    Subfolder1
        File3
        File4
        File5
The target Folder1 is created with the following structure:

Folder1
    File1 + File2 contents are merged into one file with an autogenerated file name.

Subfolder1, along with File3, File4, and File5, isn't picked up.
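The combinations in the table can be summarized as a small planning function: recursive decides which source files are picked up, and copyBehavior decides how their target names are formed. A minimal Python sketch:

import uuid

source_files = ["File1", "File2", "Subfolder1/File3", "Subfolder1/File4", "Subfolder1/File5"]

def plan_copy(files, recursive, copy_behavior):
    # Non-recursive copies pick up only files directly under the source folder.
    picked = [f for f in files if recursive or "/" not in f]
    if copy_behavior == "PreserveHierarchy":
        return picked  # relative paths are kept as-is
    if copy_behavior == "FlattenHierarchy":
        return [f"autogenerated-{uuid.uuid4().hex[:8]}" for f in picked]
    if copy_behavior == "MergeFiles":
        return [f"merged-{uuid.uuid4().hex[:8]}"]  # all picked files merge into one
    raise ValueError(copy_behavior)

print(plan_copy(source_files, recursive=False, copy_behavior="PreserveHierarchy"))
# ['File1', 'File2'] -- Subfolder1 isn't picked up, matching the table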

Preserve ACLs to Data Lake Storage Gen2

If you want to replicate the access control lists (ACLs) along with data files when you upgrade from Data Lake Storage Gen1 to Data Lake Storage Gen2, see Preserve ACLs from Data Lake Storage Gen1.

Mapping data flow properties

Learn more about source transformation and sink transformation in the mapping data flow feature.

Lookup activity properties

To learn details about the properties, check Lookup activity.

GetMetadata activity properties

To learn details about the properties, check GetMetadata activity.

Delete activity properties

To learn details about the properties, check Delete activity.

Next steps

For a list of data stores supported as sources and sinks by the copy activity in Azure Data Factory, see supported data stores.