Copy data to or from Azure Data Lake Store by using Azure Data Factory

This article outlines how to use the Copy Activity in Azure Data Factory to copy data to and from Azure Data Lake Store. It builds on the copy activity overview article that presents a general overview of copy activity.

Note

This article applies to version 2 of Data Factory, which is currently in preview. If you are using version 1 of the Data Factory service, which is generally available (GA), see Azure Data Lake Store connector in V1.

Supported capabilities

You can copy data from any supported source data store to Azure Data Lake Store, or copy data from Azure Data Lake Store to any supported sink data store. For a list of data stores that are supported as sources or sinks by the copy activity, see the Supported data stores table.

Specifically, this Azure Data Lake Store connector supports:

  • Copying files by using service principal or managed service identity (MSI) authentication.
  • Copying files as-is, or parsing or generating files with the supported file formats and compression codecs.

Get started

You can create a pipeline with the copy activity by using one of the following tools or SDKs: the .NET SDK, the Python SDK, Azure PowerShell, the REST API, or an Azure Resource Manager template. See the tutorials for step-by-step instructions to create a pipeline with a copy activity.

The following sections provide details about properties that are used to define Data Factory entities specific to Azure Data Lake Store.

Linked service properties

The following properties are supported for the Azure Data Lake Store linked service:

  • type (required): The type property must be set to AzureDataLakeStore.
  • dataLakeStoreUri (required): Information about the Azure Data Lake Store account, in one of the following formats: https://[accountname].azuredatalakestore.net/webhdfs/v1 or adl://[accountname].azuredatalakestore.net/.
  • tenant (required): The tenant information (domain name or tenant ID) under which your application resides. You can retrieve it by hovering the mouse pointer over the upper-right corner of the Azure portal.
  • subscriptionId (required for sink): The ID of the Azure subscription to which the Data Lake Store account belongs.
  • resourceGroupName (required for sink): The name of the Azure resource group to which the Data Lake Store account belongs.
  • connectVia (optional): The integration runtime to be used to connect to the data store. You can use the Azure Integration Runtime or a self-hosted integration runtime (if your data store is located in a private network). If not specified, the default Azure Integration Runtime is used.

Refer to the following sections for more properties and JSON samples for each supported authentication type:

  • Service principal authentication
  • Managed service identity authentication

Using service principal authentication

To use service principal authentication, register an application entity in Azure Active Directory (Azure AD) and grant it access to Data Lake Store. For detailed steps, see Service-to-service authentication. Make note of the following values, which you use to define the linked service:

  • Application ID
  • Application key
  • Tenant ID

Important

Make sure you grant the service principal proper permission in Azure Data Lake Store:

  • As source: In Data explorer -> Access, grant at least Read + Execute permission to list and copy the files in folders and subfolders, or Read permission to copy a single file. Choose to add the entry as both an access permission and a default permission entry. There is no requirement on account-level access control (IAM).
  • As sink: In Data explorer -> Access, grant at least Write + Execute permission to create child items in the folder. Choose to add the entry as both an access permission and a default permission entry. If you use the Azure Integration Runtime to copy (both source and sink are in the cloud), in Access control (IAM), grant at least the Reader role so that Data Factory can detect your Data Lake Store's region. If you want to avoid this IAM role, explicitly create an Azure Integration Runtime with the location of your Data Lake Store, and associate it in the Data Lake Store linked service, as shown in the sketch after this list.
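
A minimal sketch of what an explicitly located Azure Integration Runtime definition can look like; treat the exact JSON shape as an assumption, and the name and region as placeholders:

{
    "name": "<Azure IR name>",
    "properties": {
        "type": "Managed",
        "typeProperties": {
            "computeProperties": {
                "location": "<region of your Data Lake Store>"
            }
        }
    }
}

Reference this runtime from the linked service's connectVia property, as in the examples that follow.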

The following properties are supported:

  • servicePrincipalId (required): Specify the application's client ID.
  • servicePrincipalKey (required): Specify the application's key. Mark this field as a SecureString to store it securely in Data Factory, or reference a secret stored in Azure Key Vault.

Example:

{
    "name": "AzureDataLakeStoreLinkedService",
    "properties": {
        "type": "AzureDataLakeStore",
        "typeProperties": {
            "dataLakeStoreUri": "https://<accountname>.azuredatalakestore.net/webhdfs/v1",
            "servicePrincipalId": "<service principal id>",
            "servicePrincipalKey": {
                "type": "SecureString",
                "value": "<service principal key>"
            },
            "tenant": "<tenant info, e.g. microsoft.onmicrosoft.com>",
            "subscriptionId": "<subscription of ADLS>",
            "resourceGroupName": "<resource group of ADLS>"
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
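
As noted above, servicePrincipalKey can also reference a secret stored in Azure Key Vault instead of embedding the key. A sketch of that variant, assuming an Azure Key Vault linked service has already been defined (the reference name and secret name are placeholders):

"servicePrincipalKey": {
    "type": "AzureKeyVaultSecret",
    "store": {
        "referenceName": "<Azure Key Vault linked service name>",
        "type": "LinkedServiceReference"
    },
    "secretName": "<secret name>"
}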

Using managed service identity authentication

A data factory can be associated with a managed service identity, which represents this specific data factory. You can use this service identity directly for Data Lake Store authentication, similar to using your own service principal; it allows this designated factory to access and copy data to or from your Data Lake Store.

To use managed service identity (MSI) authentication:

  1. Retrieve the data factory service identity by copying the value of "SERVICE IDENTITY APPLICATION ID" generated along with your factory.
  2. Grant the service identity access to Data Lake Store the same way you do for a service principal, following the notes below.

Important

Make sure you grant the data factory service identity proper permission in Azure Data Lake Store:

  • As source: In Data explorer -> Access, grant at least Read + Execute permission to list and copy the files in folders and subfolders, or Read permission to copy a single file. Choose to add the entry as both an access permission and a default permission entry. There is no requirement on account-level access control (IAM).
  • As sink: In Data explorer -> Access, grant at least Write + Execute permission to create child items in the folder. Choose to add the entry as both an access permission and a default permission entry. If you use the Azure Integration Runtime to copy (both source and sink are in the cloud), in Access control (IAM), grant at least the Reader role so that Data Factory can detect your Data Lake Store's region. If you want to avoid this IAM role, explicitly create an Azure Integration Runtime with the location of your Data Lake Store, and associate it in the Data Lake Store linked service, as shown in the Azure IR sketch earlier in this article.

When you use managed service identity authentication, you don't need to specify any properties besides the general Data Lake Store information in the linked service.

Example:

{
    "name": "AzureDataLakeStoreLinkedService",
    "properties": {
        "type": "AzureDataLakeStore",
        "typeProperties": {
            "dataLakeStoreUri": "https://<accountname>.azuredatalakestore.net/webhdfs/v1",
            "tenant": "<tenant info, e.g. microsoft.onmicrosoft.com>",
            "subscriptionId": "<subscription of ADLS>",
            "resourceGroupName": "<resource group of ADLS>"
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}

Dataset properties

For a full list of sections and properties available for defining datasets, see the datasets article. This section provides a list of properties supported by the Azure Data Lake Store dataset.

To copy data to/from Azure Data Lake Store, set the type property of the dataset to AzureDataLakeStoreFile. The following properties are supported:

  • type (required): The type property of the dataset must be set to AzureDataLakeStoreFile.
  • folderPath (required): Path to the folder in Data Lake Store. Example: rootfolder/subfolder/.
  • fileName (optional): Name of the file in folderPath if you want to copy to or from a specific file. If you don't specify a value for this property, the dataset points to all files in the folder. When fileName isn't specified for an output dataset and preserveHierarchy isn't specified in the activity sink, the copy activity automatically generates the file name in the following format: Data.[activity run id GUID].[GUID if FlattenHierarchy].[format if configured].[compression if configured]. For example: Data.0a405f8a-93ff-4c6f-b3be-f69616f1df7a.txt.gz.
  • format (can be skipped only for binary copy): If you want to copy files as-is between file-based stores (binary copy), skip the format section in both input and output dataset definitions. If you want to parse or generate files with a specific format, the following file format types are supported: TextFormat, JsonFormat, AvroFormat, OrcFormat, ParquetFormat. Set the type property under format to one of these values. For more information, see the Text Format, Json Format, Avro Format, Orc Format, and Parquet Format sections.
  • compression (optional): The type and level of compression for the data. For more information, see Supported file formats and compression codecs. Supported types are GZip, Deflate, BZip2, and ZipDeflate. Supported levels are Optimal and Fastest.

Example:

{
    "name": "ADLSDataset",
    "properties": {
        "type": "AzureDataLakeStoreFile",
        "linkedServiceName":{
            "referenceName": "<ADLS linked service name>",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "folderPath": "datalake/myfolder/",
            "fileName": "myfile.csv.gz",
            "format": {
                "type": "TextFormat",
                "columnDelimiter": ",",
                "rowDelimiter": "\n"
            },
            "compression": {
                "type": "GZip",
                "level": "Optimal"
            }
        }
    }
}
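
For a binary copy, as described in the format property above, skip the format and compression sections entirely. A minimal sketch of such a dataset, with placeholder names:

{
    "name": "ADLSBinaryDataset",
    "properties": {
        "type": "AzureDataLakeStoreFile",
        "linkedServiceName": {
            "referenceName": "<ADLS linked service name>",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "folderPath": "datalake/myfolder/"
        }
    }
}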

Copy activity properties

For a full list of sections and properties available for defining activities, see the Pipelines article. This section provides a list of properties supported by the Azure Data Lake Store source and sink.

Azure Data Lake Store as source

To copy data from Azure Data Lake Store, set the source type in the copy activity to AzureDataLakeStoreSource. The following properties are supported in the copy activity source section:

  • type (required): The type property of the copy activity source must be set to AzureDataLakeStoreSource.
  • recursive (optional): Indicates whether the data is read recursively from the subfolders or only from the specified folder. Note that when recursive is set to true and the sink is a file-based store, empty folders and subfolders are not copied or created at the sink. Allowed values are true (default) and false.

Example:

"activities":[
    {
        "name": "CopyFromADLS",
        "type": "Copy",
        "inputs": [
            {
                "referenceName": "<ADLS input dataset name>",
                "type": "DatasetReference"
            }
        ],
        "outputs": [
            {
                "referenceName": "<output dataset name>",
                "type": "DatasetReference"
            }
        ],
        "typeProperties": {
            "source": {
                "type": "AzureDataLakeStoreSource",
                "recursive": true
            },
            "sink": {
                "type": "<sink type>"
            }
        }
    }
]

Azure Data Lake Store as sink

To copy data to Azure Data Lake Store, set the sink type in the copy activity to AzureDataLakeStoreSink. The following properties are supported in the sink section:

  • type (required): The type property of the copy activity sink must be set to AzureDataLakeStoreSink.
  • copyBehavior (optional): Defines the copy behavior when the source is files from a file-based data store. Allowed values are:
    - PreserveHierarchy (default): Preserves the file hierarchy in the target folder. The relative path of a source file to the source folder is identical to the relative path of the target file to the target folder.
    - FlattenHierarchy: All files from the source folder are placed in the first level of the target folder. The target files have auto-generated names.
    - MergeFiles: Merges all files from the source folder into one file. If the file name is specified, the merged file takes the specified name; otherwise, the file name is auto-generated.

Example:

"activities":[
    {
        "name": "CopyToADLS",
        "type": "Copy",
        "inputs": [
            {
                "referenceName": "<input dataset name>",
                "type": "DatasetReference"
            }
        ],
        "outputs": [
            {
                "referenceName": "<ADLS output dataset name>",
                "type": "DatasetReference"
            }
        ],
        "typeProperties": {
            "source": {
                "type": "<source type>"
            },
            "sink": {
                "type": "AzureDataLakeStoreSink",
                "copyBehavior": "PreserveHierarchy"
            }
        }
    }
]

recursive and copyBehavior examples

This section describes the resulting behavior of the Copy operation for different combinations of recursive and copyBehavior values.

For all the following combinations, assume the source folder has this structure:

Folder1
    File1
    File2
    Subfolder1
        File3
        File4
        File5

recursive = true, copyBehavior = preserveHierarchy
The target folder Folder1 is created with the same structure as the source:

Folder1
    File1
    File2
    Subfolder1
        File3
        File4
        File5

recursive = true, copyBehavior = flattenHierarchy
The target Folder1 is created with the following structure:

Folder1
    auto-generated name for File1
    auto-generated name for File2
    auto-generated name for File3
    auto-generated name for File4
    auto-generated name for File5

recursive = true, copyBehavior = mergeFiles
The target Folder1 is created with the following structure:

Folder1
    File1 + File2 + File3 + File4 + File5 contents are merged into one file with an auto-generated file name

recursive = false, copyBehavior = preserveHierarchy
The target folder Folder1 is created with the following structure:

Folder1
    File1
    File2

Subfolder1 with File3, File4, and File5 is not picked up.

recursive = false, copyBehavior = flattenHierarchy
The target folder Folder1 is created with the following structure:

Folder1
    auto-generated name for File1
    auto-generated name for File2

Subfolder1 with File3, File4, and File5 is not picked up.

recursive = false, copyBehavior = mergeFiles
The target folder Folder1 is created with the following structure:

Folder1
    File1 + File2 contents are merged into one file with an auto-generated file name

Subfolder1 with File3, File4, and File5 is not picked up.
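
To reproduce, for example, the last combination above (recursive = false with mergeFiles) when copying between two Data Lake Store datasets, the copy activity's typeProperties would look like the following sketch:

"typeProperties": {
    "source": {
        "type": "AzureDataLakeStoreSource",
        "recursive": false
    },
    "sink": {
        "type": "AzureDataLakeStoreSink",
        "copyBehavior": "MergeFiles"
    }
}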

Next steps

For a list of data stores supported as sources and sinks by the copy activity in Azure Data Factory, see supported data stores.