Tutorial: Transfer data to Azure Files with Azure Import/Export
This article provides step-by-step instructions on how to use the Azure Import/Export service to securely import large amounts of data into Azure Files. To import data, the service requires you to ship supported disk drives containing your data to an Azure datacenter.
The Import/Export service supports only importing data into Azure Files. Exporting data from Azure Files is not supported.
In this tutorial, you learn how to:
- Review the prerequisites to import data to Azure Files
- Step 1: Prepare the drives
- Step 2: Create an import job
- Step 3: Ship the drives to Azure datacenter
- Step 4: Update the job with tracking information
- Step 5: Verify data upload to Azure
Before you create an import job to transfer data into Azure Files, carefully review and complete the following list of prerequisites. You must:
- Have an active Azure subscription to use with Import/Export service.
- Have at least one Azure Storage account. See the list of Supported storage accounts and storage types for Import/Export service.
- Consider configuring large file shares on the storage account. During imports to Azure Files, if a file share doesn't have enough free space, auto splitting the data to multiple Azure file shares is no longer supported, and the copy will fail. For instructions, see Configure large file shares on a storage account.
- For information on creating a new storage account, see How to create a storage account.
- Have an adequate number of disks of supported types.
- Have a Windows system running a supported OS version.
- Download the current release of the Azure Import/Export version 2 tool, for files, on the Windows system:
- Download the latest release of WAImportExport version 2.
- Unzip the tool to the default folder, WaImportExportV2.
- Have a FedEx/DHL account. If you want to use a carrier other than FedEx/DHL, contact the Azure Data Box Operations team.
- The account must be valid, have a balance, and have return shipping capabilities.
- Generate a tracking number for the import job.
- Every job should have a separate tracking number. Multiple jobs with the same tracking number aren't supported.
- If you don't have a carrier account, go to:
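For the large-file-shares prerequisite above, one way to enable the feature on an existing storage account is with Azure CLI. This is a sketch; the account and resource group names are placeholders:

```azurecli
az storage account update \
    --name <my-storage-account> \
    --resource-group <my-resource-group> \
    --enable-large-file-share
```

Enabling large file shares is irreversible on a storage account, so confirm the setting fits your scenario before running the command.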
Step 1: Prepare the drives
This step generates a journal file. The journal file stores basic information such as drive serial number, encryption key, and storage account details.
Do the following steps to prepare the drives.
Connect your disk drives to the Windows system via SATA connectors.
Create a single NTFS volume on each drive and assign a drive letter to the volume. Don't use mount points.
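As an illustrative sketch of this step, a single NTFS volume with a drive letter can be created in PowerShell. The disk number, drive letter, and volume label here are assumptions; adjust them for your system:

```powershell
# CAUTION: initializing and formatting erases all data on the disk.
# Disk number 1, drive letter G, and the label are example values.
Get-Disk -Number 1 |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -DriveLetter G |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "ImportDrive1"
```

Run `Get-Disk` first to confirm which disk number corresponds to the drive you just attached.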
Modify the dataset.csv file in the root folder where the tool is. Depending on whether you want to import a file or folder or both, add entries in the dataset.csv file similar to the following examples.
To import a file: In the following example, the data to copy is on the F: drive. Your file MyFile1.txt is copied to the root of the MyAzureFileshare1. If the MyAzureFileshare1 does not exist, it's created in the Azure Storage account. Folder structure is maintained.
To import a folder: All files and folders under MyFolder2 are recursively copied to the fileshare. Folder structure is maintained. If you import a file with the same name as an existing file in the destination folder, the imported file will overwrite that file.
The /Disposition parameter, which in earlier versions of the tool let you choose what to do when you import a file that already exists, is not supported in Azure Import/Export version 2. In earlier tool versions, an imported file with the same name as an existing file was renamed by default.
Multiple entries can be made in the same file corresponding to folders or files that are imported.
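As an illustration of the entries described above, a dataset.csv might look like the following. The three-column layout (BasePath, DstItemPathOrPrefix, ItemType) is assumed from the version 2 tool's dataset format; the paths and share name are the examples used in this section:

```csv
BasePath,DstItemPathOrPrefix,ItemType
"F:\MyFile1.txt","MyAzureFileshare1/MyFile1.txt",file
"F:\MyFolder2\","MyAzureFileshare1/MyFolder2/",folder
```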
Modify the driveset.csv file in the root folder where the tool is. Add entries in the driveset.csv file similar to the following examples. The driveset file has the list of disks and corresponding drive letters so that the tool can correctly pick the list of disks to be prepared.
This example assumes that two disks are attached and basic NTFS volumes G:\ and H:\ are created. H:\ is not encrypted while G:\ is already encrypted. The tool formats and encrypts the disk that hosts H:\ only (and not G:\).
For a disk that is not encrypted: Specify Encrypt to enable BitLocker encryption on the disk.
For a disk that is already encrypted: Specify AlreadyEncrypted and supply the BitLocker key.
Multiple entries can be made in the same file corresponding to multiple drives. Learn more about preparing the driveset CSV file.
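For the two-disk scenario described above, a driveset.csv might look like this. The column names follow the version 2 tool's driveset format, and the BitLocker key is a placeholder, not a real value:

```csv
DriveLetter,FormatOption,SilentOrPromptOnFormat,Encryption,ExistingBitLockerKey
G,AlreadyFormatted,SilentMode,AlreadyEncrypted,<ExistingBitLockerKeyForG>
H,Format,SilentMode,Encrypt,
```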
Use the tool's PrepImport option to copy and prepare data on the disk drives. For the first copy session, run the following command to copy directories and/or files in a new copy session:
.\WAImportExport.exe PrepImport /j:<JournalFile> /id:<SessionId> [/logdir:<LogDirectory>] [/silentmode] [/InitialDriveSet:<driveset.csv>] /DataSet:<dataset.csv>
An import example is shown below.
.\WAImportExport.exe PrepImport /j:JournalTest.jrn /id:session#1 /InitialDriveSet:driveset.csv /DataSet:dataset.csv /logdir:C:\logs
A journal file with the name you provided with the /j: parameter is created for every run of the command line. Each drive you prepare has a journal file that must be uploaded when you create the import job. Drives without journal files aren't processed.
Do not modify the journal files or the data on the disk drives, and don't reformat any disks, after completing disk preparation.
For additional samples, go to Samples for journal files.
Step 2: Create an import job
Do the following steps to create an import job in the Azure portal.
Sign in to https://portal.azure.com/.
Search for import/export jobs.
Select + New.
- Select a subscription.
- Select a resource group, or select Create new and create a new one.
- Enter a descriptive name for the import job. Use the name to track the progress of your jobs.
- The name may contain only lowercase letters, numbers, and hyphens.
- The name must start with a letter, and may not contain spaces.
- Select Import into Azure.
Select Next: Job details > to proceed.
In Job details:
Upload the journal files that you created during the preceding Step 1: Prepare the drives.
Select the destination Azure region for the order.
Select the storage account for the import.
The dropoff location is automatically populated based on the region of the storage account selected.
If you don't want to save a verbose log, clear the Save verbose log in the 'waimportexport' blob container option.
Select Next: Shipping > to proceed.
Select the carrier from the drop-down list. If you want to use a carrier other than FedEx/DHL, choose an existing option from the dropdown and contact the Azure Data Box Operations team at firstname.lastname@example.org with the information about the carrier you plan to use.
Enter a valid carrier account number that you have created with that carrier. Microsoft uses this account to ship the drives back to you once your import job is complete.
Provide a complete and valid contact name, phone, email, street address, city, zip, state/province and country/region.
Instead of specifying an email address for a single user, provide a group email. This ensures that you receive notifications even if an admin leaves.
Select Review + create to proceed.
In the order summary:
Review the Terms, and then select "I acknowledge that all the information provided is correct and agree to the terms and conditions." Validation is then performed.
Review the job information provided in the summary. Make a note of the job name and the Azure datacenter shipping address to ship disks back to Azure. This information is used later on the shipping label.
Step 3: Ship the drives to the Azure datacenter
You can use FedEx, UPS, or DHL to ship the package to the Azure datacenter. If you want to use a carrier other than FedEx/DHL, contact the Azure Data Box Operations team.
- Provide a valid FedEx, UPS, or DHL carrier account number that Microsoft will use to ship the drives back.
- When shipping your packages, you must follow the Microsoft Azure Service Terms.
- Properly package your disks to avoid potential damage and delays in processing.
Step 4: Update the job with tracking information
After shipping the disks, return to the Import/Export page on the Azure portal.
If the tracking number is not updated within 2 weeks of creating the job, the job expires.
To update the tracking number, perform the following steps.
- Select the job.
- Once the drives are shipped, select Update job status and tracking info.
- Select the Mark as shipped checkbox.
- Provide the Carrier and Tracking number.
- Track the job progress on the portal dashboard. For a description of each job state, go to View your job status.
You can only cancel a job while it's in Creating state. After you provide tracking details, the job status changes to Shipping, and the job can't be canceled.
Step 5: Verify data upload to Azure
Track the job to completion. Once the job is complete, verify that your data has uploaded to Azure. Check your copy logs for failures. For more information, see Review copy logs. Delete the on-premises data only after you verify that upload was successful.
In the latest version of the Azure Import/Export tool for files, if a file share doesn't have enough free space, the data is no longer auto-split to multiple Azure file shares. Instead, the copy fails, and Support contacts you. You'll need to either configure large file shares on the storage account or move some data to make space in the share. For more information, see Configure large file shares on a storage account.
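As one way to spot-check the upload once the job completes, you can list the contents of the destination share with Azure CLI. This is a sketch; the share and account names are placeholders:

```azurecli
az storage file list \
    --share-name <my-file-share> \
    --account-name <my-storage-account> \
    --output table
```

Compare the listed files against your dataset.csv entries, and review the copy logs for any failures before deleting the on-premises data.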
Samples for journal files
To add more drives, create a new driveset file and run the command as below.
For subsequent copy sessions to disk drives other than those specified in the InitialDriveSet CSV file, specify a new driveset CSV file and provide it as a value to the AdditionalDriveSet parameter. Use the same journal file name and provide a new session ID. The format of the AdditionalDriveSet CSV file is the same as the InitialDriveSet format.
WAImportExport.exe PrepImport /j:<JournalFile> /id:<SessionId> /AdditionalDriveSet:<driveset.csv>
An import example is shown below.
WAImportExport.exe PrepImport /j:JournalTest.jrn /id:session#3 /AdditionalDriveSet:driveset-2.csv
To add additional data to the same driveset, use the PrepImport command for subsequent copy sessions to copy additional files/directory.
For subsequent copy sessions to the same hard disk drives specified in InitialDriveset.csv file, specify the same journal file name and provide a new session ID; there is no need to provide the storage account key.
WAImportExport.exe PrepImport /j:<JournalFile> /id:<SessionId> [/logdir:<LogDirectory>] /DataSet:<dataset.csv>
An import example is shown below.
WAImportExport.exe PrepImport /j:JournalTest.jrn /id:session#2 /DataSet:dataset-2.csv