Use Azure Import/Export service to import data to Azure Files
This article provides step-by-step instructions on how to use the Azure Import/Export service to securely import large amounts of data into Azure Files. To import data, the service requires you to ship supported disk drives containing your data to an Azure datacenter.
The Import/Export service supports only importing data into Azure Files; exporting data from Azure Files is not supported.
Before you create an import job to transfer data into Azure Files, carefully review and complete the following list of prerequisites. You must:
- Have an active Azure subscription to use with the Import/Export service.
- Have at least one Azure Storage account. See the list of Supported storage accounts and storage types for Import/Export service. For information on creating a new storage account, see How to Create a Storage Account.
- Have an adequate number of disks of supported types.
- Have a Windows system running a Supported OS version.
- Download WAImportExport version 2 on the Windows system. Unzip it to the default folder waimportexport.
- Have a FedEx/DHL account.
- The account must be valid, have sufficient balance, and have return shipping capabilities.
- Generate a tracking number for the import job.
- Every job should have a separate tracking number. Multiple jobs with the same tracking number are not supported.
- If you do not have a carrier account, create one with FedEx or DHL.
Step 1: Prepare the drives
This step generates a journal file. The journal file stores basic information such as drive serial number, encryption key, and storage account details.
Perform the following steps to prepare the drives.
Connect your disk drives to the Windows system via SATA connectors.
Create a single NTFS volume on each drive. Assign a drive letter to the volume. Do not use mountpoints.
Modify the dataset.csv file in the root folder where the tool resides. Depending on whether you want to import a file or folder or both, add entries in the dataset.csv file similar to the following examples.
To import a file: In the following example, the data to copy resides in the C: drive. Your file MyFile1.txt is copied to the root of MyAzureFileshare1. If MyAzureFileshare1 does not exist, it is created in the Azure storage account. Folder structure is maintained.
To import a folder: All files and folders under MyFolder2 are recursively copied to the file share. Folder structure is maintained.
Multiple entries can be made in the same file corresponding to folders or files that are imported.
Learn more about preparing the dataset CSV file.
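For reference, a dataset.csv covering both cases above might look like the following sketch. The column names follow the documented dataset CSV layout; the paths and share name are the illustrative ones from the examples, so adjust them to your environment.

```
BasePath,DstItemPathOrPrefix,ItemType
"C:\MyFolder1\MyFile1.txt","MyAzureFileshare1/MyFile1.txt",file
"C:\MyFolder2\","MyAzureFileshare1/MyFolder2/",folder
```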
Modify the driveset.csv file in the root folder where the tool resides. Add entries in the driveset.csv file similar to the following examples. The driveset file has the list of disks and corresponding drive letters so that the tool can correctly pick the list of disks to be prepared.
This example assumes that two disks are attached and that basic NTFS volumes G:\ and H:\ are created. H:\ is not encrypted, while G:\ is already encrypted. The tool formats and encrypts the disk that hosts H:\ only (and not G:\).
For a disk that is not encrypted: Specify Encrypt to enable BitLocker encryption on the disk.
For a disk that is already encrypted: Specify AlreadyEncrypted and supply the BitLocker key.
Multiple entries can be made in the same file corresponding to multiple drives. Learn more about preparing the driveset CSV file.
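A driveset.csv matching the two-disk scenario above might look like the following sketch. The column names follow the documented driveset CSV layout; `<BitLockerKey>` is a placeholder for the existing BitLocker key of G:\.

```
DriveLetter,FormatOption,SilentOrPromptOnFormat,Encryption,ExistingBitLockerKey
G,AlreadyFormatted,SilentMode,AlreadyEncrypted,<BitLockerKey>
H,Format,SilentMode,Encrypt,
```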
Run the WAImportExport tool with the PrepImport option to copy and prepare data on the disk drives. For the first copy session, to copy directories and/or files in a new copy session, run the following command:
``` .\WAImportExport.exe PrepImport /j:<JournalFile> /id:<SessionId> [/logdir:<LogDirectory>] [/sk:<StorageAccountKey>] [/silentmode] [/InitialDriveSet:<driveset.csv>] /DataSet:<dataset.csv> ```
An import example is shown below.
``` .\WAImportExport.exe PrepImport /j:JournalTest.jrn /id:session#1 /sk:************* /InitialDriveSet:driveset.csv /DataSet:dataset.csv /logdir:C:\logs ```
A journal file with the name you provided via the /j: parameter is created for every run of the command line. Each drive you prepare has a journal file that must be uploaded when you create the import job. Drives without journal files are not processed.
- Do not modify the data on the disk drives or the journal file after completing disk preparation.
For additional samples, go to Samples for journal files.
Step 2: Create an import job
Perform the following steps to create an import job in the Azure portal.
Log on to https://portal.azure.com/.
Go to All services > Storage > Import/export jobs.
Click Create Import/export Job.
Select Import into Azure.
Enter a descriptive name for the import job. Use this name to track your jobs while they are in progress and once they are completed.
- This name may contain only lowercase letters, numbers, hyphens, and underscores.
- The name must start with a letter, and may not contain spaces.
Select a subscription.
Select a resource group.
In Job details:
Upload the journal files that you created during the preceding Step 1: Prepare the drives.
Select the storage account that the data will be imported into.
The drop-off location is automatically populated based on the region of the selected storage account.
In Return shipping info:
Select the carrier from the drop-down list.
Enter a valid carrier account number that you have created with that carrier. Microsoft uses this account to ship the drives back to you once your import job is complete.
Provide a complete and valid contact name, phone, email, street address, city, zip, state/province and country/region.
Instead of specifying an email address for a single user, provide a group email. This ensures that you receive notifications even if an admin leaves.
In the Summary:
Provide the Azure datacenter shipping address for shipping disks back to Azure. Ensure that the job name and the full address are mentioned on the shipping label.
Click OK to complete import job creation.
Step 3: Ship the drives to the Azure datacenter
FedEx, UPS, or DHL can be used to ship the package to the Azure datacenter.
Provide a valid FedEx, UPS, or DHL carrier account number that Microsoft will use to ship the drives back.
When shipping your packages, you must follow the Microsoft Azure Service Terms.
Package your disks properly to avoid potential damage and delays in processing.
Step 4: Update the job with tracking information
After shipping the disks, return to the Import/Export page on the Azure portal.
If the tracking number is not updated within 2 weeks of creating the job, the job expires.
To update the tracking number, perform the following steps.
- Select the job.
- Once the drives are shipped, click Update job status and tracking info.
- Select the Mark as shipped checkbox.
- Provide the Carrier and Tracking number.
- Track the job progress on the portal dashboard. For a description of each job state, go to View your job status.
Step 5: Verify data upload to Azure
Track the job to completion. Once the job is complete, verify that your data has uploaded to Azure. Delete the on-premises data only after you have verified that upload was successful.
Samples for journal files
To add more drives, create a new driveset file and run the command as shown below.
For subsequent copy sessions to disk drives other than those specified in the InitialDriveSet CSV file, specify a new driveset CSV file and provide it as the value of the AdditionalDriveSet parameter. Use the same journal file name and provide a new session ID. The format of the AdditionalDriveSet CSV file is the same as the InitialDriveSet format.
``` WAImportExport.exe PrepImport /j:<JournalFile> /id:<SessionId> /AdditionalDriveSet:<driveset.csv> ```
An import example is shown below.
``` WAImportExport.exe PrepImport /j:JournalTest.jrn /id:session#3 /AdditionalDriveSet:driveset-2.csv ```
To add additional data to the same driveset, use the PrepImport command for subsequent copy sessions to copy additional files/directories.
For subsequent copy sessions to the same hard disk drives specified in InitialDriveset.csv file, specify the same journal file name and provide a new session ID; there is no need to provide the storage account key.
``` WAImportExport.exe PrepImport /j:<JournalFile> /id:<SessionId> [/logdir:<LogDirectory>] /DataSet:<dataset.csv> ```
An import example is shown below.
``` WAImportExport.exe PrepImport /j:JournalTest.jrn /id:session#2 /DataSet:dataset-2.csv ```