CREATE EXTERNAL DATA SOURCE (Transact-SQL)
Creates an external data source for querying using SQL Server, Azure SQL Database, Azure Synapse Analytics, Analytics Platform System (PDW), or Azure SQL Edge.
This article provides the syntax, arguments, remarks, permissions, and examples for whichever SQL product you choose.
Overview: SQL Server
Applies to:
SQL Server 2016 (13.x) and later
Creates an external data source for PolyBase queries. External data sources are used to establish connectivity and support these primary use cases:
- Data virtualization and data load using PolyBase
- Bulk load operations using `BULK INSERT` or `OPENROWSET`
For more information about the syntax conventions, see Transact-SQL Syntax Conventions.
Syntax for SQL Server 2016
Note
This syntax varies between versions of SQL Server. Use the version selector dropdown to choose the appropriate version of SQL Server. To view the features of SQL Server 2019, see CREATE EXTERNAL DATA SOURCE.
```sql
CREATE EXTERNAL DATA SOURCE <data_source_name>
WITH
  ( [ LOCATION = '<prefix>://<path>[:<port>]' ]
    [ [ , ] CREDENTIAL = <credential_name> ]
    [ [ , ] TYPE = { HADOOP } ]
    [ [ , ] RESOURCE_MANAGER_LOCATION = '<resource_manager>[:<port>]' ]
  )
[ ; ]
```
Arguments
data_source_name
Specifies the user-defined name for the data source. The name must be unique within the database in SQL Server.
LOCATION = '<prefix>://<path>[:port]'
Provides the connectivity protocol and path to the external data source.
| External Data Source | Location prefix | Location path | Supported locations by product / service |
|---|---|---|---|
| Cloudera CDH or Hortonworks HDP | `hdfs` | `<Namenode>[:port]` | Starting with SQL Server 2016 (13.x) |
| Azure Storage account (V2) | `wasb[s]` | `<container>@<storage_account>.blob.core.windows.net` | Starting with SQL Server 2016 (13.x). Hierarchical Namespace not supported |
Location path:
- `<Namenode>` = the machine name, name service URI, or IP address of the `Namenode` in the Hadoop cluster. PolyBase must resolve any DNS names used by the Hadoop cluster.
- `port` = the port that the external data source is listening on. In Hadoop, the port can be found using the `fs.defaultFS` configuration parameter. The default is 8020.
- `<container>` = the container of the storage account holding the data. Root containers are read-only; data can't be written back to the container.
- `<storage_account>` = the storage account name of the Azure resource.
- `<server_name>` = the host name.
- `<instance_name>` = the name of the SQL Server named instance. Used if you have the SQL Server Browser Service running on the target instance.
Additional notes and guidance when setting the location:
- The SQL Server Database Engine doesn't verify the existence of the external data source when the object is created. To validate, create an external table using the external data source.
- Use the same external data source for all tables when querying Hadoop to ensure consistent querying semantics.
- `wasbs` is optional but recommended for accessing Azure Storage accounts, because data is sent over a secure TLS/SSL connection.
- To ensure successful PolyBase queries during a Hadoop `Namenode` fail-over, consider using a virtual IP address for the `Namenode` of the Hadoop cluster. If you don't, execute an ALTER EXTERNAL DATA SOURCE command to point to the new location.
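If the `Namenode` address does change after a fail-over, repointing the data source can be sketched as follows (the data source name and address are placeholders):

```sql
-- Repoint the existing external data source at the new Namenode address.
ALTER EXTERNAL DATA SOURCE MyHadoopCluster
    SET LOCATION = 'hdfs://10.10.10.11:8020';
```

External tables that reference this data source pick up the new location without being recreated.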
CREDENTIAL = credential_name
Specifies a database-scoped credential for authenticating to the external data source.
CREDENTIAL is only required if the data has been secured. CREDENTIAL isn't required for data sets that allow anonymous access.
To create a database scoped credential, see CREATE DATABASE SCOPED CREDENTIAL (Transact-SQL).
TYPE = [ HADOOP ]
Specifies the type of the external data source being configured. This parameter isn't always required, and should only be specified as HADOOP when connecting to Cloudera CDH, Hortonworks HDP, or an Azure Storage account.
Note
TYPE should be set to HADOOP even when accessing Azure Storage.
For an example of using TYPE = HADOOP to load data from an Azure Storage account, see Create external data source to access data in Azure Storage using the wasb:// interface.
RESOURCE_MANAGER_LOCATION = 'ResourceManager_URI[:port]'
Configure this optional value when connecting to Cloudera CDH, Hortonworks HDP, or an Azure Storage account only.
When the RESOURCE_MANAGER_LOCATION is defined, the query optimizer will make a cost-based decision to improve performance. A MapReduce job can be used to push down the computation to Hadoop. Specifying the RESOURCE_MANAGER_LOCATION can significantly reduce the volume of data transferred between Hadoop and SQL Server, which can lead to improved query performance.
If the Resource Manager isn't specified, pushing compute to Hadoop is disabled for PolyBase queries.
If the port isn't specified, the default value is chosen using the current setting for 'hadoop connectivity' configuration.
| Hadoop Connectivity | Default Resource Manager Port |
|---|---|
| 1 | 50300 |
| 2 | 50300 |
| 3 | 8021 |
| 4 | 8032 |
| 5 | 8050 |
| 6 | 8032 |
| 7 | 8050 |
| 8 | 8032 |
For a complete list of supported Hadoop versions, see PolyBase Connectivity Configuration (Transact-SQL).
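The current 'hadoop connectivity' setting can be inspected and changed with `sp_configure`; a minimal sketch (the value 7 is only an example, and the change requires a restart of SQL Server to take effect):

```sql
-- Show the current 'hadoop connectivity' setting.
EXEC sp_configure @configname = 'hadoop connectivity';

-- Set it to option 7, then apply the change.
EXEC sp_configure @configname = 'hadoop connectivity', @configvalue = 7;
RECONFIGURE;
```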
Important
The RESOURCE_MANAGER_LOCATION value is not validated when you create the external data source. Entering an incorrect value may cause query failure at execution time whenever push-down is attempted, because the provided value can't be resolved.
Create external data source to reference Hadoop with push-down enabled provides a concrete example and further guidance.
Permissions
Requires CONTROL permission on database in SQL Server.
Locking
Takes a shared lock on the EXTERNAL DATA SOURCE object.
Security
PolyBase supports proxy-based authentication for most external data sources. Create a database scoped credential to create the proxy account.
When you connect to the storage or data pool in a SQL Server big data cluster, the user's credentials are passed through to the back-end system. Create logins in the data pool itself to enable pass-through authentication.
Examples
Important
For information on how to install and enable PolyBase, see Install PolyBase on Windows.
A. Create external data source to reference Hadoop
To create an external data source that references your Hortonworks HDP or Cloudera CDH Hadoop cluster, specify the machine name or IP address of the Hadoop `Namenode`, and the port.
```sql
CREATE EXTERNAL DATA SOURCE MyHadoopCluster
WITH
  ( LOCATION = 'hdfs://10.10.10.10:8050',
    TYPE = HADOOP
  );
```
B. Create external data source to reference Hadoop with push-down enabled
Specify the RESOURCE_MANAGER_LOCATION option to enable push-down computation to Hadoop for PolyBase queries. Once enabled, PolyBase makes a cost-based decision to determine whether the query computation should be pushed to Hadoop.
```sql
CREATE EXTERNAL DATA SOURCE MyHadoopCluster
WITH
  ( LOCATION = 'hdfs://10.10.10.10:8020',
    TYPE = HADOOP,
    RESOURCE_MANAGER_LOCATION = '10.10.10.10:8050'
  );
```
C. Create external data source to reference Kerberos-secured Hadoop
To verify if the Hadoop cluster is Kerberos-secured, check the value of hadoop.security.authentication property in Hadoop core-site.xml. To reference a Kerberos-secured Hadoop cluster, you must specify a database scoped credential that contains your Kerberos username and password. The database master key is used to encrypt the database scoped credential secret.
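For reference, on a Kerberos-secured cluster the core-site.xml file contains an entry like the following (on a non-secured cluster the value is `simple`):

```xml
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
```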
```sql
-- Create a database master key if one does not already exist, using your own password.
-- This key is used to encrypt the credential secret in the next step.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>';

-- Create a database scoped credential with Kerberos user name and password.
CREATE DATABASE SCOPED CREDENTIAL HadoopUser1
WITH
    IDENTITY = '<hadoop_user_name>',
    SECRET = '<hadoop_password>';

-- Create an external data source with CREDENTIAL option.
CREATE EXTERNAL DATA SOURCE MyHadoopCluster
WITH
  ( LOCATION = 'hdfs://10.10.10.10:8050',
    CREDENTIAL = HadoopUser1,
    TYPE = HADOOP,
    RESOURCE_MANAGER_LOCATION = '10.10.10.10:8050'
  );
```
D. Create external data source to access data in Azure Storage using the wasb:// interface
In this example, the external data source is an Azure V2 Storage account named logs. The storage container is called daily. The Azure Storage external data source is for data transfer only. It doesn't support predicate push-down. Hierarchical namespaces are not supported when accessing data via the wasb:// interface.
This example shows how to create the database scoped credential for authentication to an Azure V2 Storage account. Specify the Azure Storage account key in the database credential secret. You can specify any string in database scoped credential identity as it isn't used during authentication to Azure Storage. Note that when connecting to the Azure Storage via the WASB[s] connector, authentication must be done with a storage account key, not with a shared access signature (SAS).
In SQL Server 2016, TYPE should be set to HADOOP even when accessing Azure Storage.
```sql
-- Create a database master key if one does not already exist, using your own password.
-- This key is used to encrypt the credential secret in the next step.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>';

-- Create a database scoped credential with the Azure storage account key as the secret.
CREATE DATABASE SCOPED CREDENTIAL AzureStorageCredential
WITH
    IDENTITY = '<my_account>',
    SECRET = '<azure_storage_account_key>';

-- Create an external data source with CREDENTIAL option.
CREATE EXTERNAL DATA SOURCE MyAzureStorage
WITH
  ( LOCATION = 'wasbs://daily@logs.blob.core.windows.net/',
    CREDENTIAL = AzureStorageCredential,
    TYPE = HADOOP
  );
```
Overview: SQL Server
Applies to:
SQL Server 2016 (13.x) and later
Creates an external data source for PolyBase queries. External data sources are used to establish connectivity and support these primary use cases:
- Data virtualization and data load using PolyBase
- Bulk load operations using `BULK INSERT` or `OPENROWSET`
For more information about the syntax conventions, see Transact-SQL Syntax Conventions.
Syntax for SQL Server 2017
Note
This syntax varies between versions of SQL Server. Use the version selector dropdown to choose the appropriate version of SQL Server. To view the features of SQL Server 2019, see CREATE EXTERNAL DATA SOURCE.
```sql
CREATE EXTERNAL DATA SOURCE <data_source_name>
WITH
  ( [ LOCATION = '<prefix>://<path>[:<port>]' ]
    [ [ , ] CREDENTIAL = <credential_name> ]
    [ [ , ] TYPE = { HADOOP | BLOB_STORAGE } ]
    [ [ , ] RESOURCE_MANAGER_LOCATION = '<resource_manager>[:<port>]' ]
  )
[ ; ]
```
Arguments
data_source_name
Specifies the user-defined name for the data source. The name must be unique within the database in SQL Server.
LOCATION = '<prefix>://<path>[:port]'
Provides the connectivity protocol and path to the external data source.
| External Data Source | Location prefix | Location path | Supported locations by product / service |
|---|---|---|---|
| Cloudera CDH or Hortonworks HDP | `hdfs` | `<Namenode>[:port]` | Starting with SQL Server 2016 (13.x) |
| Azure Storage account (V2) | `wasb[s]` | `<container>@<storage_account>.blob.core.windows.net` | Starting with SQL Server 2016 (13.x). Hierarchical Namespace not supported |
| Bulk Operations | `https` | `<storage_account>.blob.core.windows.net/<container>` | Starting with SQL Server 2017 (14.x) |
Location path:
- `<Namenode>` = the machine name, name service URI, or IP address of the `Namenode` in the Hadoop cluster. PolyBase must resolve any DNS names used by the Hadoop cluster.
- `port` = the port that the external data source is listening on. In Hadoop, the port can be found using the `fs.defaultFS` configuration parameter. The default is 8020.
- `<container>` = the container of the storage account holding the data. Root containers are read-only; data can't be written back to the container.
- `<storage_account>` = the storage account name of the Azure resource.
- `<server_name>` = the host name.
- `<instance_name>` = the name of the SQL Server named instance. Used if you have the SQL Server Browser Service running on the target instance.
Additional notes and guidance when setting the location:
- The SQL Server Database Engine doesn't verify the existence of the external data source when the object is created. To validate, create an external table using the external data source.
- Use the same external data source for all tables when querying Hadoop to ensure consistent querying semantics.
- Specify the `Driver={<Name of Driver>}` when connecting via `ODBC`.
- `wasbs` is optional but recommended for accessing Azure Storage accounts, because data is sent over a secure TLS/SSL connection.
- To ensure successful PolyBase queries during a Hadoop `Namenode` fail-over, consider using a virtual IP address for the `Namenode` of the Hadoop cluster. If you don't, execute an ALTER EXTERNAL DATA SOURCE command to point to the new location.
CREDENTIAL = credential_name
Specifies a database-scoped credential for authenticating to the external data source.
Additional notes and guidance when creating a credential:
- `CREDENTIAL` is only required if the data has been secured. `CREDENTIAL` isn't required for data sets that allow anonymous access.
- When `TYPE` = `BLOB_STORAGE`, the credential must be created using `SHARED ACCESS SIGNATURE` as the identity. Furthermore, the SAS token should be configured as follows:
  - Exclude the leading `?` when configured as the secret.
  - Have at least read permission on the file that should be loaded (for example `srt=o&sp=r`).
  - Use a valid expiration period (all dates are in UTC time).
  - `TYPE` = `BLOB_STORAGE` is only permitted for bulk operations; you can't create external tables for an external data source with `TYPE` = `BLOB_STORAGE`.
- When connecting to Azure Storage via the WASB[s] connector, authentication must be done with a storage account key, not with a shared access signature (SAS).
- When `TYPE` = `HADOOP`, the credential must be created using the storage account key as the `SECRET`.
For an example of using a CREDENTIAL with SHARED ACCESS SIGNATURE and TYPE = BLOB_STORAGE, see Create an external data source to execute bulk operations and retrieve data from Azure Storage into SQL Database.
To create a database scoped credential, see CREATE DATABASE SCOPED CREDENTIAL (Transact-SQL).
TYPE = [ HADOOP | BLOB_STORAGE ]
Specifies the type of the external data source being configured. This parameter isn't always required, and should only be specified when connecting to Cloudera CDH, Hortonworks HDP, an Azure Storage account, or an Azure Data Lake Storage Gen2.
- Use `HADOOP` when the external data source is Cloudera CDH, Hortonworks HDP, an Azure Storage account, or an Azure Data Lake Storage Gen2.
- Use `BLOB_STORAGE` when executing bulk operations from an Azure Storage account using BULK INSERT or OPENROWSET. Introduced with SQL Server 2017 (14.x). Use `HADOOP` when intending to CREATE EXTERNAL TABLE against Azure Storage.
Note
TYPE should be set to HADOOP even when accessing Azure Storage.
For an example of using TYPE = HADOOP to load data from an Azure Storage account, see Create external data source to access data in Azure Storage using the wasb:// interface.
RESOURCE_MANAGER_LOCATION = 'ResourceManager_URI[:port]'
Configure this optional value when connecting to Cloudera CDH, Hortonworks HDP, or an Azure Storage account only.
When the RESOURCE_MANAGER_LOCATION is defined, the Query Optimizer will make a cost-based decision to improve performance. A MapReduce job can be used to push down the computation to Hadoop. Specifying the RESOURCE_MANAGER_LOCATION can significantly reduce the volume of data transferred between Hadoop and SQL Server, which can lead to improved query performance.
If the Resource Manager isn't specified, pushing compute to Hadoop is disabled for PolyBase queries.
If the port isn't specified, the default value is chosen using the current setting for 'hadoop connectivity' configuration.
| Hadoop Connectivity | Default Resource Manager Port |
|---|---|
| 1 | 50300 |
| 2 | 50300 |
| 3 | 8021 |
| 4 | 8032 |
| 5 | 8050 |
| 6 | 8032 |
| 7 | 8050 |
| 8 | 8032 |
For a complete list of supported Hadoop versions, see PolyBase Connectivity Configuration (Transact-SQL).
Important
The RESOURCE_MANAGER_LOCATION value is not validated when you create the external data source. Entering an incorrect value may cause query failure at execution time whenever push-down is attempted, because the provided value can't be resolved.
Create external data source to reference Hadoop with push-down enabled provides a concrete example and further guidance.
Permissions
Requires CONTROL permission on database in SQL Server.
Locking
Takes a shared lock on the EXTERNAL DATA SOURCE object.
Security
PolyBase supports proxy-based authentication for most external data sources. Create a database scoped credential to create the proxy account.
When you connect to the storage or data pool in a SQL Server big data cluster, the user's credentials are passed through to the back-end system. Create logins in the data pool itself to enable pass-through authentication.
An SAS token with type HADOOP is unsupported. It's only supported with type = BLOB_STORAGE when a storage account access key is used instead. Attempting to create an external data source with type HADOOP and a SAS credential fails with the following error:
Msg 105019, Level 16, State 1 - EXTERNAL TABLE access failed due to internal error: 'Java exception raised on call to HdfsBridge_Connect. Java exception message: Parameters provided to connect to the Azure storage account are not valid.: Error [Parameters provided to connect to the Azure storage account are not valid.] occurred while accessing external file.'
Examples
Important
For information on how to install and enable PolyBase, see Install PolyBase on Windows.
A. Create external data source to reference Hadoop
To create an external data source that references your Hortonworks HDP or Cloudera CDH Hadoop cluster, specify the machine name or IP address of the Hadoop `Namenode`, and the port.
```sql
CREATE EXTERNAL DATA SOURCE MyHadoopCluster
WITH
  ( LOCATION = 'hdfs://10.10.10.10:8050',
    TYPE = HADOOP
  );
```
B. Create external data source to reference Hadoop with push-down enabled
Specify the RESOURCE_MANAGER_LOCATION option to enable push-down computation to Hadoop for PolyBase queries. Once enabled, PolyBase makes a cost-based decision to determine whether the query computation should be pushed to Hadoop.
```sql
CREATE EXTERNAL DATA SOURCE MyHadoopCluster
WITH
  ( LOCATION = 'hdfs://10.10.10.10:8020',
    TYPE = HADOOP,
    RESOURCE_MANAGER_LOCATION = '10.10.10.10:8050'
  );
```
C. Create external data source to reference Kerberos-secured Hadoop
To verify if the Hadoop cluster is Kerberos-secured, check the value of hadoop.security.authentication property in Hadoop core-site.xml. To reference a Kerberos-secured Hadoop cluster, you must specify a database scoped credential that contains your Kerberos username and password. The database master key is used to encrypt the database scoped credential secret.
```sql
-- Create a database master key if one does not already exist, using your own password.
-- This key is used to encrypt the credential secret in the next step.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>';

-- Create a database scoped credential with Kerberos user name and password.
CREATE DATABASE SCOPED CREDENTIAL HadoopUser1
WITH
    IDENTITY = '<hadoop_user_name>',
    SECRET = '<hadoop_password>';

-- Create an external data source with CREDENTIAL option.
CREATE EXTERNAL DATA SOURCE MyHadoopCluster
WITH
  ( LOCATION = 'hdfs://10.10.10.10:8050',
    CREDENTIAL = HadoopUser1,
    TYPE = HADOOP,
    RESOURCE_MANAGER_LOCATION = '10.10.10.10:8050'
  );
```
D. Create external data source to access data in Azure Storage using the wasb:// interface
In this example, the external data source is an Azure V2 Storage account named logs. The storage container is called daily. The Azure Storage external data source is for data transfer only. It doesn't support predicate push-down. Hierarchical namespaces are not supported when accessing data via the wasb:// interface. Note that when connecting to the Azure Storage via the WASB[s] connector, authentication must be done with a storage account key, not with a shared access signature (SAS).
This example shows how to create the database scoped credential for authentication to an Azure V2 Storage account. Specify the Azure Storage account key in the database credential secret. You can specify any string in database scoped credential identity as it isn't used during authentication to Azure Storage.
```sql
-- Create a database master key if one does not already exist, using your own password.
-- This key is used to encrypt the credential secret in the next step.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>';

-- Create a database scoped credential with the Azure storage account key as the secret.
CREATE DATABASE SCOPED CREDENTIAL AzureStorageCredential
WITH
    IDENTITY = '<my_account>',
    SECRET = '<azure_storage_account_key>';

-- Create an external data source with CREDENTIAL option.
CREATE EXTERNAL DATA SOURCE MyAzureStorage
WITH
  ( LOCATION = 'wasbs://daily@logs.blob.core.windows.net/',
    CREDENTIAL = AzureStorageCredential,
    TYPE = HADOOP
  );
```
Examples: Bulk Operations
Important
Do not add a trailing /, file name, or shared access signature parameters at the end of the LOCATION URL when configuring an external data source for bulk operations.
E. Create an external data source for bulk operations retrieving data from Azure Storage
Applies to: SQL Server 2017 (14.x) and later.
Use the following data source for bulk operations using BULK INSERT or OPENROWSET. The credential must set SHARED ACCESS SIGNATURE as the identity, mustn't have the leading ? in the SAS token, must have at least read permission on the file that should be loaded (for example srt=o&sp=r), and the expiration period should be valid (all dates are in UTC time). For more information on shared access signatures, see Using Shared Access Signatures (SAS).
```sql
CREATE DATABASE SCOPED CREDENTIAL AccessAzureInvoices
WITH
    IDENTITY = 'SHARED ACCESS SIGNATURE',
    -- Remove the ? from the beginning of the SAS token.
    SECRET = '<sas_token>';

CREATE EXTERNAL DATA SOURCE MyAzureInvoices
WITH
  ( LOCATION = 'https://newinvoices.blob.core.windows.net/week3',
    CREDENTIAL = AccessAzureInvoices,
    TYPE = BLOB_STORAGE
  );
```
To see this example in use, see the BULK INSERT example.
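As a sketch of how such a data source is consumed (the target table and file name here are hypothetical):

```sql
-- Bulk load a CSV file from the container referenced by MyAzureInvoices.
BULK INSERT Invoices
FROM 'inv-2017-12-08.csv'
WITH ( DATA_SOURCE = 'MyAzureInvoices',
       FORMAT = 'CSV' );

-- Or read the file ad hoc with OPENROWSET.
SELECT *
FROM OPENROWSET( BULK 'inv-2017-12-08.csv',
                 DATA_SOURCE = 'MyAzureInvoices',
                 SINGLE_CLOB ) AS DataFile;
```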
Overview: SQL Server
Applies to:
SQL Server 2016 (13.x) and later
Creates an external data source for PolyBase queries. External data sources are used to establish connectivity and support these primary use cases:
- Data virtualization and data load using PolyBase
- Bulk load operations using `BULK INSERT` or `OPENROWSET`
For more information about the syntax conventions, see Transact-SQL Syntax Conventions.
Syntax for SQL Server 2019
Note
This syntax varies between versions of SQL Server. Use the version selector dropdown to choose the appropriate version of SQL Server.
```sql
CREATE EXTERNAL DATA SOURCE <data_source_name>
WITH
  ( [ LOCATION = '<prefix>://<path>[:<port>]' ]
    [ [ , ] CONNECTION_OPTIONS = '<key_value_pairs>' ]
    [ [ , ] CREDENTIAL = <credential_name> ]
    [ [ , ] PUSHDOWN = { ON | OFF } ]
    [ [ , ] TYPE = { HADOOP | BLOB_STORAGE } ]
    [ [ , ] RESOURCE_MANAGER_LOCATION = '<resource_manager>[:<port>]' ]
  )
[ ; ]
```
Arguments
data_source_name
Specifies the user-defined name for the data source. The name must be unique within the database in SQL Server.
LOCATION = '<prefix>://<path>[:port]'
Provides the connectivity protocol and path to the external data source.
| External Data Source | Location prefix | Location path | Supported locations by product / service |
|---|---|---|---|
| Cloudera CDH or Hortonworks HDP | `hdfs` | `<Namenode>[:port]` | Starting with SQL Server 2016 (13.x) |
| Azure Storage account (V2) | `wasb[s]` | `<container>@<storage_account>.blob.core.windows.net` | Starting with SQL Server 2016 (13.x). Hierarchical Namespace not supported |
| SQL Server | `sqlserver` | `<server_name>[\<instance_name>][:port]` | Starting with SQL Server 2019 (15.x) |
| Oracle | `oracle` | `<server_name>[:port]` | Starting with SQL Server 2019 (15.x) |
| Teradata | `teradata` | `<server_name>[:port]` | Starting with SQL Server 2019 (15.x) |
| MongoDB or Cosmos DB API for MongoDB | `mongodb` | `<server_name>[:port]` | Starting with SQL Server 2019 (15.x) |
| Generic ODBC | `odbc` | `<server_name>[:port]` | Starting with SQL Server 2019 (15.x). Windows only |
| Bulk Operations | `https` | `<storage_account>.blob.core.windows.net/<container>` | Starting with SQL Server 2017 (14.x) |
| Azure Data Lake Storage Gen2 | `abfs[s]` | `abfss://<container>@<storage_account>.dfs.core.windows.net` | Starting with SQL Server 2019 (15.x) CU11+ |
| SQL Server Big Data Clusters data pool | `sqldatapool` | `sqldatapool://controller-svc/default` | Only supported in SQL Server Big Data Clusters |
| SQL Server Big Data Clusters storage pool | `sqlhdfs` | `sqlhdfs://controller-svc/default` | Only supported in SQL Server Big Data Clusters |
Location path:
- `<Namenode>` = the machine name, name service URI, or IP address of the `Namenode` in the Hadoop cluster. PolyBase must resolve any DNS names used by the Hadoop cluster.
- `port` = the port that the external data source is listening on. In Hadoop, the port can be found using the `fs.defaultFS` configuration parameter. The default is 8020.
- `<container>` = the container of the storage account holding the data. Root containers are read-only; data can't be written back to the container.
- `<storage_account>` = the storage account name of the Azure resource.
- `<server_name>` = the host name.
- `<instance_name>` = the name of the SQL Server named instance. Used if you have the SQL Server Browser Service running on the target instance.
Additional notes and guidance when setting the location:
- The SQL Server Database Engine doesn't verify the existence of the external data source when the object is created. To validate, create an external table using the external data source.
- Use the same external data source for all tables when querying Hadoop to ensure consistent querying semantics.
- You can use the `sqlserver` location prefix to connect SQL Server 2019 (15.x) to another SQL Server, to Azure SQL Database, or to Azure Synapse Analytics.
- Specify the `Driver={<Name of Driver>}` when connecting via `ODBC`.
- Using `wasbs` or `abfss` is optional but recommended for accessing Azure Storage accounts, because data is sent over a secure TLS/SSL connection.
- The `abfs` or `abfss` APIs are supported when accessing Azure Storage accounts starting with SQL Server 2019 (15.x) CU11. For more information, see the Azure Blob Filesystem driver (ABFS).
- The Hierarchical Namespace option for Azure Storage accounts (V2) using `abfs[s]` is supported via Azure Data Lake Storage Gen2 starting with SQL Server 2019 (15.x) CU11+. The Hierarchical Namespace option is otherwise not supported, and should remain disabled.
- To ensure successful PolyBase queries during a Hadoop `Namenode` fail-over, consider using a virtual IP address for the `Namenode` of the Hadoop cluster. If you don't, execute an ALTER EXTERNAL DATA SOURCE command to point to the new location.
- The `sqlhdfs` and `sqldatapool` types are supported for connecting between the master instance and storage pool of a big data cluster. For Cloudera CDH or Hortonworks HDP, use `hdfs`. For more information on using `sqlhdfs` for querying SQL Server Big Data Clusters storage pools, see Query HDFS in a SQL Server big data cluster.
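As an illustrative sketch of the `sqlserver` prefix, the server, instance, port, and credential names below are placeholders (the credential would hold a SQL login and password valid on the target instance):

```sql
-- Reference another SQL Server instance over the sqlserver:// connector (SQL Server 2019).
CREATE EXTERNAL DATA SOURCE SqlServerInstance2
WITH
  ( LOCATION = 'sqlserver://WINSQL2019\SQL2019:58137',
    CREDENTIAL = SQLServerCredentials
  );
```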
CONNECTION_OPTIONS = key_value_pair
Specified for SQL Server 2019 (15.x) only. Specifies additional options when connecting over ODBC to an external data source. To use multiple connection options, separate them with a semicolon.
Applies to generic ODBC connections, as well as built-in ODBC connectors for SQL Server, Oracle, Teradata, MongoDB, and Azure Cosmos DB API for MongoDB.
The key_value_pair is the keyword and the value for a specific connection option. The available keywords and values depend on the external data source type. The name of the driver is required as a minimum, but there are other options such as `APP='<your_application_name>'` or `ApplicationIntent=ReadOnly|ReadWrite` that are also useful to set and can assist with troubleshooting.
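A hedged sketch of `CONNECTION_OPTIONS` with the generic `odbc` connector follows; the host, port, driver name, and credential name are placeholders, and the named ODBC driver must actually be installed on the SQL Server machine:

```sql
-- Generic ODBC source; the driver name is passed through CONNECTION_OPTIONS.
CREATE EXTERNAL DATA SOURCE MyOdbcSource
WITH
  ( LOCATION = 'odbc://POSTGRES1.domain:5432',
    CONNECTION_OPTIONS = 'Driver={PostgreSQL Unicode(x64)}',
    PUSHDOWN = ON,
    CREDENTIAL = postgres_credential
  );
```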
PUSHDOWN = ON | OFF
Specified for SQL Server 2019 (15.x) only. States whether computation can be pushed down to the external data source. It is on by default.
PUSHDOWN is supported when connecting to SQL Server, Oracle, Teradata, MongoDB, the Azure Cosmos DB API for MongoDB, or ODBC at the external data source level.
Enabling or disabling push-down at the query level is achieved through a hint.
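At the query level, push-down is controlled with the `FORCE EXTERNALPUSHDOWN` and `DISABLE EXTERNALPUSHDOWN` hints; for example (`MyExternalTable` is a hypothetical external table):

```sql
-- Force computation to be pushed to the external source for this query...
SELECT * FROM MyExternalTable WHERE col1 > 100
OPTION (FORCE EXTERNALPUSHDOWN);

-- ...or keep all computation local to SQL Server.
SELECT * FROM MyExternalTable WHERE col1 > 100
OPTION (DISABLE EXTERNALPUSHDOWN);
```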
CREDENTIAL = credential_name
Specifies a database-scoped credential for authenticating to the external data source.
Additional notes and guidance when creating a credential:
- `CREDENTIAL` is only required if the data has been secured. `CREDENTIAL` isn't required for data sets that allow anonymous access.
- When `TYPE` = `BLOB_STORAGE`, the credential must be created using `SHARED ACCESS SIGNATURE` as the identity. Furthermore, the SAS token should be configured as follows:
  - Exclude the leading `?` when configured as the secret.
  - Have at least read permission on the file that should be loaded (for example `srt=o&sp=r`).
  - Use a valid expiration period (all dates are in UTC time).
  - `TYPE` = `BLOB_STORAGE` is only permitted for bulk operations; you can't create external tables for an external data source with `TYPE` = `BLOB_STORAGE`.
- When connecting to Azure Storage via the WASB[s] connector, authentication must be done with a storage account key, not with a shared access signature (SAS).
- When `TYPE` = `HADOOP`, the credential must be created using the storage account key as the `SECRET`.
For an example of using a CREDENTIAL with SHARED ACCESS SIGNATURE and TYPE = BLOB_STORAGE, see Create an external data source to execute bulk operations and retrieve data from Azure Storage into SQL Database.
To create a database scoped credential, see CREATE DATABASE SCOPED CREDENTIAL (Transact-SQL).
TYPE = [ HADOOP | BLOB_STORAGE ]
Specifies the type of the external data source being configured. This parameter isn't always required, and should only be specified when connecting to Cloudera CDH, Hortonworks HDP, an Azure Storage account, or an Azure Data Lake Storage Gen2.
- In SQL Server 2019 (15.x), don't specify TYPE unless connecting to Cloudera CDH, Hortonworks HDP, or an Azure Storage account.
- Use `HADOOP` when the external data source is Cloudera CDH, Hortonworks HDP, an Azure Storage account, or an Azure Data Lake Storage Gen2.
- Use `BLOB_STORAGE` when executing bulk operations from an Azure Storage account using BULK INSERT or OPENROWSET with SQL Server 2017 (14.x). Use `HADOOP` when intending to CREATE EXTERNAL TABLE against Azure Storage.
For an example of using TYPE = HADOOP to load data from an Azure Storage account, see Create external data source to access data in Azure Storage using the wasb:// interface.
RESOURCE_MANAGER_LOCATION = 'ResourceManager_URI[:port]'
Configure this optional value when connecting to Cloudera CDH, Hortonworks HDP, or an Azure Storage account only. In SQL Server 2019 (15.x), don't specify RESOURCE_MANAGER_LOCATION unless connecting to Cloudera CDH, Hortonworks HDP, or an Azure Storage account.
When the RESOURCE_MANAGER_LOCATION is defined, the Query Optimizer will make a cost-based decision to improve performance. A MapReduce job can be used to push down the computation to Hadoop. Specifying the RESOURCE_MANAGER_LOCATION can significantly reduce the volume of data transferred between Hadoop and SQL Server, which can lead to improved query performance.
If the Resource Manager isn't specified, pushing compute to Hadoop is disabled for PolyBase queries.
If the port isn't specified, the default value is chosen using the current setting for 'hadoop connectivity' configuration.
| Hadoop Connectivity | Default Resource Manager Port |
|---|---|
| 1 | 50300 |
| 2 | 50300 |
| 3 | 8021 |
| 4 | 8032 |
| 5 | 8050 |
| 6 | 8032 |
| 7 | 8050 |
| 8 | 8032 |
For a complete list of supported Hadoop versions, see PolyBase Connectivity Configuration (Transact-SQL).
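As a sketch of how the default port is resolved, the current 'hadoop connectivity' value can be inspected or changed with sp_configure; the option value 7 used below is illustrative only.

```sql
-- Inspect the current 'hadoop connectivity' setting; its value determines
-- the default Resource Manager port per the table above.
EXEC sp_configure @configname = 'hadoop connectivity';

-- Illustrative only: set connectivity option 7 (default Resource Manager port 8050).
-- RECONFIGURE applies the change; a SQL Server restart is required before it takes effect.
EXEC sp_configure @configname = 'hadoop connectivity', @configvalue = 7;
RECONFIGURE;
```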
Important
The RESOURCE_MANAGER_LOCATION value isn't validated when you create the external data source. Entering an incorrect value can cause query failure at execution time whenever push-down is attempted, because the provided value can't be resolved.
Create external data source to reference Hadoop with push-down enabled provides a concrete example and further guidance.
Permissions
Requires CONTROL permission on database in SQL Server.
Locking
Takes a shared lock on the EXTERNAL DATA SOURCE object.
Security
PolyBase supports proxy-based authentication for most external data sources. Create a database scoped credential to create the proxy account.
When you connect to the storage or data pool in a SQL Server big data cluster, the user's credentials are passed through to the back-end system. Create logins in the data pool itself to enable pass-through authentication.
A SAS token with TYPE = HADOOP is unsupported; it's only supported with TYPE = BLOB_STORAGE. With TYPE = HADOOP, use a storage account access key instead. Attempting to create an external data source with TYPE = HADOOP and a SAS credential fails with the following error:
Msg 105019, Level 16, State 1 - EXTERNAL TABLE access failed due to internal error: 'Java exception raised on call to HdfsBridge_Connect. Java exception message: Parameters provided to connect to the Azure storage account are not valid.: Error [Parameters provided to connect to the Azure storage account are not valid.] occurred while accessing external file.'
Examples
Important
For information on how to install and enable PolyBase, see Install PolyBase on Windows
A. Create external data source in SQL Server 2019 to reference Oracle
To create an external data source that references Oracle, ensure you have a database scoped credential. You may optionally also enable or disable push-down of computation against this data source.
-- Create a database master key if one does not already exist, using your own password. This key is used to encrypt the credential secret in the next step.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>' ;
-- Create a database scoped credential with the Oracle username and password as the identity and secret.
CREATE DATABASE SCOPED CREDENTIAL OracleProxyAccount
WITH
IDENTITY = 'oracle_username',
SECRET = 'oracle_password' ;
CREATE EXTERNAL DATA SOURCE MyOracleServer
WITH
( LOCATION = 'oracle://145.145.145.145:1521',
CREDENTIAL = OracleProxyAccount,
PUSHDOWN = ON
) ;
For additional examples to other data sources such as MongoDB, see Configure PolyBase to access external data in MongoDB.
B. Create external data source to reference Hadoop
To create an external data source to reference your Hortonworks HDP or Cloudera CDH Hadoop cluster, specify the machine name or IP address of the Hadoop Namenode, and the port.
CREATE EXTERNAL DATA SOURCE MyHadoopCluster
WITH
( LOCATION = 'hdfs://10.10.10.10:8050' ,
TYPE = HADOOP
) ;
C. Create external data source to reference Hadoop with push-down enabled
Specify the RESOURCE_MANAGER_LOCATION option to enable push-down computation to Hadoop for PolyBase queries. Once enabled, PolyBase makes a cost-based decision to determine whether the query computation should be pushed to Hadoop.
CREATE EXTERNAL DATA SOURCE MyHadoopCluster
WITH
( LOCATION = 'hdfs://10.10.10.10:8020' ,
TYPE = HADOOP ,
RESOURCE_MANAGER_LOCATION = '10.10.10.10:8050'
) ;
D. Create external data source to reference Kerberos-secured Hadoop
To verify whether the Hadoop cluster is Kerberos-secured, check the value of the hadoop.security.authentication property in Hadoop core-site.xml. To reference a Kerberos-secured Hadoop cluster, you must specify a database scoped credential that contains your Kerberos username and password. The database master key is used to encrypt the database scoped credential secret.
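For context, the relevant core-site.xml entry looks like the following excerpt; the value shown is illustrative, and a value of kerberos indicates a secured cluster.

```xml
<!-- Hadoop core-site.xml (excerpt) -->
<property>
  <name>hadoop.security.authentication</name>
  <!-- "simple" = unsecured cluster; "kerberos" = Kerberos-secured cluster -->
  <value>kerberos</value>
</property>
```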
-- Create a database master key if one does not already exist, using your own password. This key is used to encrypt the credential secret in the next step.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>' ;
-- Create a database scoped credential with Kerberos user name and password.
CREATE DATABASE SCOPED CREDENTIAL HadoopUser1
WITH
IDENTITY = '<hadoop_user_name>',
SECRET = '<hadoop_password>' ;
-- Create an external data source with CREDENTIAL option.
CREATE EXTERNAL DATA SOURCE MyHadoopCluster
WITH
( LOCATION = 'hdfs://10.10.10.10:8050' ,
CREDENTIAL = HadoopUser1 ,
TYPE = HADOOP ,
RESOURCE_MANAGER_LOCATION = '10.10.10.10:8050'
);
E. Create external data source to access data in Azure Storage using the wasb:// interface
In this example, the external data source is an Azure V2 Storage account named logs. The storage container is called daily. The Azure Storage external data source is for data transfer only. It doesn't support predicate push-down. Hierarchical namespaces aren't supported when accessing data via the wasb:// interface. Note that when connecting to Azure Storage via the WASB[s] connector, authentication must be done with a storage account key, not with a shared access signature (SAS).
This example shows how to create the database scoped credential for authentication to an Azure V2 Storage account. Specify the Azure Storage account key in the database credential secret. You can specify any string in database scoped credential identity as it isn't used during authentication to Azure Storage.
-- Create a database master key if one does not already exist, using your own password. This key is used to encrypt the credential secret in the next step.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>' ;
-- Create a database scoped credential with Azure storage account key as the secret.
CREATE DATABASE SCOPED CREDENTIAL AzureStorageCredential
WITH
IDENTITY = '<my_account>' ,
SECRET = '<azure_storage_account_key>' ;
-- Create an external data source with CREDENTIAL option.
CREATE EXTERNAL DATA SOURCE MyAzureStorage
WITH
( LOCATION = 'wasbs://daily@logs.blob.core.windows.net/' ,
CREDENTIAL = AzureStorageCredential ,
TYPE = HADOOP
) ;
F. Create external data source to reference a SQL Server named instance via PolyBase connectivity
Applies to: SQL Server 2019 (15.x) and later
To create an external data source that references a named instance of SQL Server, use CONNECTION_OPTIONS to specify the instance name.
In the example below, WINSQL2019 is the host name and SQL2019 is the instance name. 'Server=%s\SQL2019' is the key-value pair.
CREATE EXTERNAL DATA SOURCE SQLServerInstance2
WITH (
LOCATION = 'sqlserver://WINSQL2019' ,
CONNECTION_OPTIONS = 'Server=%s\SQL2019' ,
CREDENTIAL = SQLServerCredentials
) ;
Alternatively, you can use a port to connect to a SQL Server instance.
CREATE EXTERNAL DATA SOURCE SQLServerInstance2
WITH (
LOCATION = 'sqlserver://WINSQL2019:58137' ,
CREDENTIAL = SQLServerCredentials
) ;
Examples: Bulk Operations
Important
Do not add a trailing /, file name, or shared access signature parameters at the end of the LOCATION URL when configuring an external data source for bulk operations.
G. Create an external data source for bulk operations retrieving data from Azure Storage
Applies to: SQL Server 2017 (14.x) and later.
Use the following data source for bulk operations using BULK INSERT or OPENROWSET. The credential must set SHARED ACCESS SIGNATURE as the identity, mustn't have the leading ? in the SAS token, must have at least read permission on the file that should be loaded (for example srt=o&sp=r), and the expiration period should be valid (all dates are in UTC time). For more information on shared access signatures, see Using Shared Access Signatures (SAS).
CREATE DATABASE SCOPED CREDENTIAL AccessAzureInvoices
WITH
IDENTITY = 'SHARED ACCESS SIGNATURE',
-- Remove ? from the beginning of the SAS token
SECRET = '<azure_shared_access_signature>' ;
CREATE EXTERNAL DATA SOURCE MyAzureInvoices
WITH
( LOCATION = 'https://newinvoices.blob.core.windows.net/week3' ,
CREDENTIAL = AccessAzureInvoices ,
TYPE = BLOB_STORAGE
) ;
To see this example in use, see the BULK INSERT example.
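As a minimal usage sketch, the data source can then be referenced from BULK INSERT or OPENROWSET via the DATA_SOURCE option; the target table and file name below are hypothetical.

```sql
-- Hypothetical target table and file; MyAzureInvoices is the external data source created above.
BULK INSERT Sales.Invoices
FROM 'inv-2017-12-08.csv'
WITH ( DATA_SOURCE = 'MyAzureInvoices',
       FORMAT = 'CSV' );

-- Or read the file ad hoc as a single document with OPENROWSET:
SELECT *
FROM OPENROWSET( BULK 'inv-2017-12-08.csv',
                 DATA_SOURCE = 'MyAzureInvoices',
                 SINGLE_CLOB ) AS invoices;
```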
H. Create external data source to access data in Azure Storage using the abfs:// interface
Applies to: SQL Server 2019 (15.x) CU11 and later
In this example, the external data source is an Azure Data Lake Storage Gen2 account named logs, using the Azure Blob Filesystem driver (ABFS). The storage container is called daily. The Azure Data Lake Storage Gen2 external data source is for data transfer only, as predicate push-down is not supported.
This example shows how to create the database scoped credential for authentication to an Azure Data Lake Storage Gen2 account. Specify the Azure Storage account key in the database credential secret. You can specify any string in database scoped credential identity as it isn't used during authentication to Azure Storage.
-- Create a database master key if one does not already exist, using your own password. This key is used to encrypt the credential secret in the next step.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>' ;
-- Create a database scoped credential with Azure storage account key as the secret.
CREATE DATABASE SCOPED CREDENTIAL AzureStorageCredential
WITH
IDENTITY = '<my_account>' ,
SECRET = '<azure_storage_account_key>' ;
-- Create an external data source with CREDENTIAL option.
CREATE EXTERNAL DATA SOURCE MyAzureStorage
WITH
( LOCATION = 'abfss://daily@logs.dfs.core.windows.net/' ,
CREDENTIAL = AzureStorageCredential ,
TYPE = HADOOP
) ;
* SQL Database *
Overview: Azure SQL Database
Applies to:
Azure SQL Database
Creates an external data source for elastic queries. External data sources are used to establish connectivity and support these primary use cases:
- Bulk load operations using `BULK INSERT` or `OPENROWSET`
- Query remote SQL Database or Azure Synapse instances using SQL Database with elastic query
- Query a sharded SQL Database using elastic query
Syntax
For more information about the syntax conventions, see Transact-SQL Syntax Conventions.
CREATE EXTERNAL DATA SOURCE <data_source_name>
WITH
( [ LOCATION = '<prefix>://<path>[:<port>]' ]
[ [ , ] CREDENTIAL = <credential_name> ]
[ [ , ] TYPE = { BLOB_STORAGE | RDBMS | SHARD_MAP_MANAGER } ]
[ [ , ] DATABASE_NAME = '<database_name>' ]
[ [ , ] SHARD_MAP_NAME = '<shard_map_manager>' ] )
[ ; ]
Arguments
data_source_name
Specifies the user-defined name for the data source. The name must be unique within the database in SQL Database.
LOCATION = '<prefix>://<path[:port]>'
Provides the connectivity protocol and path to the external data source.
| External Data Source | Location prefix | Location path | Availability |
|---|---|---|---|
| Bulk Operations | `https` | `<storage_account>.blob.core.windows.net/<container>` | |
| Elastic Query (shard) | Not required | `<shard_map_server_name>.database.windows.net` | |
| Elastic Query (remote) | Not required | `<remote_server_name>.database.windows.net` | |
| EdgeHub | `edgehub` | `edgehub://` | Available in Azure SQL Edge only. EdgeHub is always local to the instance of Azure SQL Edge. As such, there's no need to specify a path or port value. |
| Kafka | `kafka` | `kafka://<kafka_bootstrap_server_name_ip>:<port_number>` | Available in Azure SQL Edge only. |
Location path:
- `<shard_map_server_name>` = The logical server name in Azure that hosts the shard map manager. The `DATABASE_NAME` argument provides the database used to host the shard map, and `SHARD_MAP_NAME` is used for the shard map itself.
- `<remote_server_name>` = The target logical server name for the elastic query. The database name is specified using the `DATABASE_NAME` argument.
Additional notes and guidance when setting the location:
- The Database Engine doesn't verify the existence of the external data source when the object is created. To validate, create an external table using the external data source.
CREDENTIAL = credential_name
Specifies a database-scoped credential for authenticating to the external data source.
Additional notes and guidance when creating a credential:
- To load data from Azure Storage into Azure SQL Database, use a Shared Access Signature (SAS token).
- `CREDENTIAL` is only required if the data has been secured. `CREDENTIAL` isn't required for data sets that allow anonymous access.
- When `TYPE` = `BLOB_STORAGE`, the credential must be created using `SHARED ACCESS SIGNATURE` as the identity. Furthermore, the SAS token should be configured as follows:
  - Exclude the leading `?` when configured as the secret
  - Have at least read permission on the file that should be loaded (for example `srt=o&sp=r`)
  - Use a valid expiration period (all dates are in UTC time).
- `TYPE` = `BLOB_STORAGE` is only permitted for bulk operations; you can't create external tables for an external data source with `TYPE` = `BLOB_STORAGE`.
- Note that when connecting to Azure Storage via the WASB[s] connector, authentication must be done with a storage account key, not with a shared access signature (SAS).
- When `TYPE` = `HADOOP`, the credential must be created using the storage account key as the `SECRET`.
For an example of using a CREDENTIAL with SHARED ACCESS SIGNATURE and TYPE = BLOB_STORAGE, see Create an external data source to execute bulk operations and retrieve data from Azure Storage into SQL Database.
To create a database scoped credential, see CREATE DATABASE SCOPED CREDENTIAL (Transact-SQL).
TYPE = [ BLOB_STORAGE | RDBMS | SHARD_MAP_MANAGER ]
Specifies the type of the external data source being configured. This parameter isn't always required.
- Use `RDBMS` for cross-database queries using elastic query from SQL Database.
- Use `SHARD_MAP_MANAGER` when creating an external data source for connecting to a sharded SQL Database.
- Use `BLOB_STORAGE` when executing bulk operations with BULK INSERT or OPENROWSET.
Important
Do not set TYPE if using any other external data source.
DATABASE_NAME = database_name
Configure this argument when the TYPE is set to RDBMS or SHARD_MAP_MANAGER.
| TYPE | Value of DATABASE_NAME |
|---|---|
| RDBMS | The name of the remote database on the server provided using LOCATION |
| SHARD_MAP_MANAGER | Name of the database operating as the shard map manager |
For an example showing how to create an external data source where TYPE = RDBMS, see Create an RDBMS external data source.
SHARD_MAP_NAME = shard_map_name
Used when the TYPE argument is set to SHARD_MAP_MANAGER only to set the name of the shard map.
For an example showing how to create an external data source where TYPE = SHARD_MAP_MANAGER, see Create a shard map manager external data source.
Permissions
Requires CONTROL permission on database in Azure SQL Database.
Locking
Takes a shared lock on the EXTERNAL DATA SOURCE object.
Examples:
A. Create a shard map manager external data source
To create an external data source to reference a SHARD_MAP_MANAGER, specify the SQL Database server name that hosts the shard map manager in SQL Database or a SQL Server database on a virtual machine.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>' ;
CREATE DATABASE SCOPED CREDENTIAL ElasticDBQueryCred
WITH
IDENTITY = '<username>',
SECRET = '<password>' ;
CREATE EXTERNAL DATA SOURCE MyElasticDBQueryDataSrc
WITH
( TYPE = SHARD_MAP_MANAGER ,
LOCATION = '<server_name>.database.windows.net' ,
DATABASE_NAME = 'ElasticScaleStarterKit_ShardMapManagerDb' ,
CREDENTIAL = ElasticDBQueryCred ,
SHARD_MAP_NAME = 'CustomerIDShardMap'
) ;
For a step-by-step tutorial, see Getting started with elastic queries for sharding (horizontal partitioning).
B. Create an RDBMS external data source
To create an external data source to reference an RDBMS, specify the SQL Database server name of the remote database in SQL Database.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>' ;
CREATE DATABASE SCOPED CREDENTIAL SQL_Credential
WITH
IDENTITY = '<username>' ,
SECRET = '<password>' ;
CREATE EXTERNAL DATA SOURCE MyElasticDBQueryDataSrc
WITH
( TYPE = RDBMS ,
LOCATION = '<server_name>.database.windows.net' ,
DATABASE_NAME = 'Customers' ,
CREDENTIAL = SQL_Credential
) ;
For a step-by-step tutorial on RDBMS, see Getting started with cross-database queries (vertical partitioning).
Examples: Bulk Operations
Important
Do not add a trailing /, file name, or shared access signature parameters at the end of the LOCATION URL when configuring an external data source for bulk operations.
C. Create an external data source for bulk operations retrieving data from Azure Storage
Use the following data source for bulk operations using BULK INSERT or OPENROWSET. The credential must set SHARED ACCESS SIGNATURE as the identity, mustn't have the leading ? in the SAS token, must have at least read permission on the file that should be loaded (for example srt=o&sp=r), and the expiration period should be valid (all dates are in UTC time). For more information on shared access signatures, see Using Shared Access Signatures (SAS).
CREATE DATABASE SCOPED CREDENTIAL AccessAzureInvoices
WITH
IDENTITY = 'SHARED ACCESS SIGNATURE',
-- Remove ? from the beginning of the SAS token
SECRET = '******srt=sco&sp=rwac&se=2017-02-01T00:55:34Z&st=2016-12-29T16:55:34Z***************' ;
CREATE EXTERNAL DATA SOURCE MyAzureInvoices
WITH
( LOCATION = 'https://newinvoices.blob.core.windows.net/week3' ,
CREDENTIAL = AccessAzureInvoices ,
TYPE = BLOB_STORAGE
) ;
To see this example in use, see BULK INSERT.
Examples: Azure SQL Edge
Important
For information on configuring external data for Azure SQL Edge, see Data streaming in Azure SQL Edge.
A. Create external data source to reference Kafka
Applies to: Azure SQL Edge only
In this example, the external data source is a Kafka server with IP address xxx.xxx.xxx.xxx and listening on port 1900. The Kafka external data source is only for data streaming and does not support predicate push-down.
-- Create an External Data Source for Kafka
CREATE EXTERNAL DATA SOURCE MyKafkaServer WITH (
LOCATION = 'kafka://xxx.xxx.xxx.xxx:1900'
)
GO
B. Create external data source to reference EdgeHub
Applies to: Azure SQL Edge only
In this example, the external data source is an EdgeHub running on the same edge device as Azure SQL Edge. The EdgeHub external data source is only for data streaming and does not support predicate push-down.
-- Create an External Data Source for EdgeHub
CREATE EXTERNAL DATA SOURCE MyEdgeHub WITH (
LOCATION = 'edgehub://'
)
GO
* Azure Synapse Analytics *
Overview: Azure Synapse Analytics
Applies to:
Azure Synapse Analytics
Creates an external data source for PolyBase. External data sources are used to establish connectivity and support the following primary use case: Data virtualization and data load using PolyBase
Important
To create an external data source to query an Azure Synapse Analytics resource using Azure SQL Database with elastic query, see SQL Database.
Syntax
For more information about the syntax conventions, see Transact-SQL Syntax Conventions.
CREATE EXTERNAL DATA SOURCE <data_source_name>
WITH
( [ LOCATION = '<prefix>://<path>[:<port>]' ]
[ [ , ] CREDENTIAL = <credential_name> ]
[ [ , ] TYPE = HADOOP ] )
[ ; ]
Arguments
data_source_name
Specifies the user-defined name for the data source. The name must be unique within the database in Azure Synapse Analytics.
LOCATION = '<prefix>://<path[:port]>'
Provides the connectivity protocol and path to the external data source.
| External Data Source | Location prefix | Location path |
|---|---|---|
| Azure Data Lake Store Gen 1 | `adl` | `<storage_account>.azuredatalakestore.net` |
| Azure Data Lake Store Gen 2 | `abfs[s]` | `<container>@<storage_account>.dfs.core.windows.net` |
| Azure V2 Storage account | `wasb[s]` | `<container>@<storage_account>.blob.core.windows.net` |
Location path:
- `<container>` = the container of the storage account holding the data. Root containers are read-only; data can't be written back to the container.
- `<storage_account>` = the storage account name of the Azure resource.
Additional notes and guidance when setting the location:
- The default option is to use `enable secure SSL connections` when provisioning Azure Data Lake Storage Gen2. When this is enabled, you must use `abfss` when a secure TLS/SSL connection is selected. Note that `abfss` works for unsecure TLS connections as well. For more information, see the Azure Blob Filesystem driver (ABFS).
- Azure Synapse doesn't verify the existence of the external data source when the object is created. To validate, create an external table using the external data source.
- Use the same external data source for all tables when querying Hadoop to ensure consistent querying semantics.
- `wasbs` is recommended, as data is sent using a secure TLS connection.
- Hierarchical Namespaces aren't supported with Azure V2 Storage Accounts when accessing data via PolyBase using the wasb:// interface.
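Since the data source isn't verified at creation time, a minimal external table sketch like the following can serve as a validation step; the file format name, path, column, and data source name are hypothetical.

```sql
-- Hypothetical file format; an external file format must exist before creating the table.
CREATE EXTERNAL FILE FORMAT TextFileFormat
WITH ( FORMAT_TYPE = DELIMITEDTEXT,
       FORMAT_OPTIONS ( FIELD_TERMINATOR = ',' ) );

-- Hypothetical table and path; creating the table exercises connectivity to the data source.
CREATE EXTERNAL TABLE dbo.SampleExternalTable
( Col1 NVARCHAR(100) )
WITH ( LOCATION = '/sample/path/',
       DATA_SOURCE = MyAzureStorage,   -- hypothetical external data source name
       FILE_FORMAT = TextFileFormat );
```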
CREDENTIAL = credential_name
Specifies a database-scoped credential for authenticating to the external data source.
Additional notes and guidance when creating a credential:
- To load data from Azure Storage or Azure Data Lake Store (ADLS) Gen 2 into Azure Synapse Analytics, use an Azure Storage Key.
- `CREDENTIAL` is only required if the data has been secured. `CREDENTIAL` isn't required for data sets that allow anonymous access.
To create a database scoped credential, see CREATE DATABASE SCOPED CREDENTIAL (Transact-SQL).
TYPE = HADOOP
Specifies the type of the external data source being configured. This parameter isn't always required.
Use HADOOP when the external data source is Azure Storage, ADLS Gen 1, or ADLS Gen 2.
For an example of using TYPE = HADOOP to load data from Azure Storage, see Create external data source to reference Azure Data Lake Store Gen 1 or 2 using a service principal.
Permissions
Requires CONTROL permission on the database.
Locking
Takes a shared lock on the EXTERNAL DATA SOURCE object.
Security
PolyBase supports proxy-based authentication for most external data sources. Create a database scoped credential to create the proxy account.
When you connect to the storage or data pool in a SQL Server big data cluster, the user's credentials are passed through to the back-end system. Create logins in the data pool itself to enable pass-through authentication.
A SAS token with TYPE = HADOOP is unsupported; it's only supported with TYPE = BLOB_STORAGE. With TYPE = HADOOP, use a storage account access key instead. Attempting to create an external data source with TYPE = HADOOP and a SAS credential fails with the following error:
Msg 105019, Level 16, State 1 - EXTERNAL TABLE access failed due to internal error: 'Java exception raised on call to HdfsBridge_Connect. Java exception message: Parameters provided to connect to the Azure storage account are not valid.: Error [Parameters provided to connect to the Azure storage account are not valid.] occurred while accessing external file.'
Examples:
A. Create external data source to access data in Azure Storage using the wasb:// interface
In this example, the external data source is an Azure V2 Storage account named logs. The storage container is called daily. The Azure Storage external data source is for data transfer only. It doesn't support predicate push-down. Hierarchical namespaces aren't supported when accessing data via the wasb:// interface. Note that when connecting to Azure Storage via the WASB[s] connector, authentication must be done with a storage account key, not with a shared access signature (SAS).
This example shows how to create the database scoped credential for authentication to Azure Storage. Specify the Azure Storage account key in the database credential secret. You can specify any string in database scoped credential identity as it isn't used during authentication to Azure storage.
-- Create a database master key if one does not already exist, using your own password. This key is used to encrypt the credential secret in the next step.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>' ;
-- Create a database scoped credential with Azure storage account key as the secret.
CREATE DATABASE SCOPED CREDENTIAL AzureStorageCredential
WITH
IDENTITY = '<my_account>',
SECRET = '<azure_storage_account_key>' ;
-- Create an external data source with CREDENTIAL option.
CREATE EXTERNAL DATA SOURCE MyAzureStorage
WITH
( LOCATION = 'wasbs://daily@logs.blob.core.windows.net/' ,
CREDENTIAL = AzureStorageCredential ,
TYPE = HADOOP
) ;
B. Create external data source to reference Azure Data Lake Store Gen 1 or 2 using a service principal
Azure Data Lake Store connectivity can be based on your ADLS URI and your Azure Active Directory application's service principal. Documentation for creating this application can be found at Data lake store authentication using Active Directory.
-- If you do not have a Master Key on your DW you will need to create one.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>' ;
-- These values come from your Azure Active Directory Application used to authenticate to ADLS
CREATE DATABASE SCOPED CREDENTIAL ADLS_credential
WITH
-- IDENTITY = '<clientID>@<OAuth2.0TokenEndPoint>' ,
IDENTITY = '536540b4-4239-45fe-b9a3-629f97591c0c@https://login.microsoftonline.com/42f988bf-85f1-41af-91ab-2d2cd011da47/oauth2/token' ,
-- SECRET = '<KEY>'
SECRET = 'BjdIlmtKp4Fpyh9hIvr8HJlUida/seM5kQ3EpLAmeDI='
;
-- For Gen 1 - Create an external data source
-- TYPE: HADOOP - PolyBase uses Hadoop APIs to access data in Azure Data Lake Storage.
-- LOCATION: Provide Data Lake Storage Gen 1 account name and URI
-- CREDENTIAL: Provide the credential created in the previous step
CREATE EXTERNAL DATA SOURCE AzureDataLakeStore
WITH
( LOCATION = 'adl://newyorktaxidataset.azuredatalakestore.net' ,
CREDENTIAL = ADLS_credential ,
TYPE = HADOOP
) ;
-- For Gen 2 - Create an external data source
-- TYPE: HADOOP - PolyBase uses Hadoop APIs to access data in Azure Data Lake Storage.
-- LOCATION: Provide Data Lake Storage Gen 2 account name and URI
-- CREDENTIAL: Provide the credential created in the previous step
CREATE EXTERNAL DATA SOURCE AzureDataLakeStore
WITH
-- Please note the abfss endpoint when your account has secure transfer enabled
( LOCATION = 'abfss://data@newyorktaxidataset.dfs.core.windows.net' ,
CREDENTIAL = ADLS_credential ,
TYPE = HADOOP
) ;
C. Create external data source to reference Azure Data Lake Store Gen 2 using the storage account key
-- If you do not have a Master Key on your DW you will need to create one.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>' ;
CREATE DATABASE SCOPED CREDENTIAL ADLS_credential
WITH
-- IDENTITY = '<storage_account_name>' ,
IDENTITY = 'newyorktaxidata' ,
-- SECRET = '<storage_account_key>'
SECRET = 'yz5N4+bxSb89McdiysJAzo+9hgEHcJRJuXbF/uC3mhbezES/oe00vXnZEl14U0lN3vxrFKsphKov16C0w6aiTQ=='
;
-- Note this example uses a Gen 2 secured endpoint (abfss)
CREATE EXTERNAL DATA SOURCE <data_source_name>
WITH
( LOCATION = 'abfss://2013@newyorktaxidataset.dfs.core.windows.net' ,
CREDENTIAL = ADLS_credential ,
TYPE = HADOOP
) ;
D. Create external data source to reference PolyBase connectivity to Azure Data Lake Store Gen 2 using abfs://
There's no need to specify SECRET when connecting to an Azure Data Lake Store Gen2 account with the Managed Identity mechanism.
-- If you do not have a Master Key on your DW you will need to create one
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>' ;
--Create database scoped credential with **IDENTITY = 'Managed Service Identity'**
CREATE DATABASE SCOPED CREDENTIAL msi_cred
WITH IDENTITY = 'Managed Service Identity' ;
--Create external data source with abfss:// scheme for connecting to your Azure Data Lake Store Gen2 account
CREATE EXTERNAL DATA SOURCE ext_datasource_with_abfss
WITH
( TYPE = HADOOP ,
LOCATION = 'abfss://myfile@mystorageaccount.dfs.core.windows.net' ,
CREDENTIAL = msi_cred
) ;
See also
- CREATE DATABASE SCOPED CREDENTIAL (Transact-SQL)
- CREATE EXTERNAL FILE FORMAT (Transact-SQL)
- CREATE EXTERNAL TABLE (Transact-SQL)
- CREATE EXTERNAL TABLE AS SELECT (Azure Synapse Analytics)
- CREATE TABLE AS SELECT (Azure Synapse Analytics)
- sys.external_data_sources (Transact-SQL)
- Using Shared Access Signatures (SAS)
* Analytics Platform System (PDW) *
Overview: Analytics Platform System
Applies to:
Analytics Platform System (PDW)
Creates an external data source for PolyBase queries. External data sources are used to establish connectivity and support the following use case: Data virtualization and data load using PolyBase.
Syntax
For more information about the syntax conventions, see Transact-SQL Syntax Conventions.
CREATE EXTERNAL DATA SOURCE <data_source_name>
WITH
( [ LOCATION = '<prefix>://<path>[:<port>]' ]
[ [ , ] CREDENTIAL = <credential_name> ]
[ [ , ] TYPE = HADOOP ]
[ [ , ] RESOURCE_MANAGER_LOCATION = '<resource_manager>[:<port>]' ] )
[ ; ]
Arguments
data_source_name
Specifies the user-defined name for the data source. The name must be unique within the server in Analytics Platform System (PDW).
LOCATION = '<prefix>://<path[:port]>'
Provides the connectivity protocol and path to the external data source.
| External Data Source | Location prefix | Location path |
|---|---|---|
| Cloudera CDH or Hortonworks HDP | `hdfs` | `<Namenode>[:port]` |
| Azure Storage Account | `wasb[s]` | `<container>@<storage_account>.blob.core.windows.net` |
Location path:
- `<Namenode>` = the machine name, name service URI, or IP address of the `Namenode` in the Hadoop cluster. PolyBase must resolve any DNS names used by the Hadoop cluster.
- `port` = The port that the external data source is listening on. In Hadoop, the port can be found using the `fs.defaultFS` configuration parameter. The default is 8020.
- `<container>` = the container of the storage account holding the data. Root containers are read-only; data can't be written back to the container.
- `<storage_account>` = the storage account name of the Azure resource.
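For reference, the Namenode port can be read from the cluster's core-site.xml; the host and port below are illustrative.

```xml
<!-- Hadoop core-site.xml (excerpt) -->
<property>
  <name>fs.defaultFS</name>
  <!-- The hdfs:// host:port here is what LOCATION should point at; 8020 is the default port. -->
  <value>hdfs://10.10.10.10:8020</value>
</property>
```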
Additional notes and guidance when setting the location:
- The PDW engine doesn't verify the existence of the external data source when the object is created. To validate, create an external table using the external data source.
- Use the same external data source for all tables when querying Hadoop to ensure consistent querying semantics.
- `wasbs` is recommended, as data is sent using a secure TLS connection.
- Hierarchical Namespaces are not supported when used with Azure Storage accounts over wasb://.
- To ensure successful PolyBase queries during a Hadoop `Namenode` fail-over, consider using a virtual IP address for the `Namenode` of the Hadoop cluster. If you don't, execute an ALTER EXTERNAL DATA SOURCE command to point to the new location.
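After a Namenode fail-over without a virtual IP, re-pointing the data source can be sketched as follows; the data source name and new address are illustrative.

```sql
-- Re-point an existing external data source at the new Namenode address after fail-over.
ALTER EXTERNAL DATA SOURCE MyHadoopCluster
    SET LOCATION = 'hdfs://10.10.10.11:8020' ;
```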
CREDENTIAL = credential_name
Specifies a database-scoped credential for authenticating to the external data source.
Additional notes and guidance when creating a credential:
- To load data from Azure Storage into Azure Synapse or PDW, use an Azure Storage Key.
- `CREDENTIAL` is only required if the data has been secured. `CREDENTIAL` isn't required for data sets that allow anonymous access.
TYPE = [ HADOOP ]
Specifies the type of the external data source being configured. This parameter isn't always required.
- Use HADOOP when the external data source is Cloudera CDH, Hortonworks HDP, or Azure Storage.
For an example of using TYPE = HADOOP to load data from Azure Storage, see Create external data source to reference Hadoop.
RESOURCE_MANAGER_LOCATION = 'ResourceManager_URI[:port]'
Configure this optional value when connecting to Cloudera CDH, Hortonworks HDP, or an Azure Storage account only.
When the RESOURCE_MANAGER_LOCATION is defined, the query optimizer will make a cost-based decision to improve performance. A MapReduce job can be used to push down the computation to Hadoop. Specifying the RESOURCE_MANAGER_LOCATION can significantly reduce the volume of data transferred between Hadoop and SQL, which can lead to improved query performance.
If the Resource Manager isn't specified, pushing compute to Hadoop is disabled for PolyBase queries.
If the port isn't specified, the default value is chosen using the current setting for 'hadoop connectivity' configuration.
| Hadoop Connectivity | Default Resource Manager Port |
|---|---|
| 1 | 50300 |
| 2 | 50300 |
| 3 | 8021 |
| 4 | 8032 |
| 5 | 8050 |
| 6 | 8032 |
| 7 | 8050 |
For a complete list of supported Hadoop versions, see PolyBase Connectivity Configuration (Transact-SQL).
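The current 'hadoop connectivity' setting can be inspected and changed with `sp_configure`. A sketch (the value 7 is shown only as an example; pick the value that matches your Hadoop distribution):

```sql
-- View the current 'hadoop connectivity' setting.
EXEC sp_configure @configname = 'hadoop connectivity';

-- Example: change the setting, then run RECONFIGURE.
-- A restart of the SQL Server services is required before
-- the new value takes effect.
EXEC sp_configure @configname = 'hadoop connectivity', @configvalue = 7;
RECONFIGURE;
```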
Important
The RESOURCE_MANAGER_LOCATION value isn't validated when you create the external data source. Entering an incorrect value may cause query failure at execution time whenever push-down is attempted, because the provided value can't be resolved.
Create external data source to reference Hadoop with push-down enabled provides a concrete example and further guidance.
Permissions
Requires CONTROL permission on the database in Analytics Platform System (PDW).
Note
In previous releases of PDW, create external data source required ALTER ANY EXTERNAL DATA SOURCE permissions.
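As an illustration, the required permission can be granted as follows (the database and principal names are placeholders):

```sql
-- Grant CONTROL on the database to a principal that will
-- create external data sources. SalesDb and PolyBaseUser
-- are illustrative names.
GRANT CONTROL ON DATABASE::SalesDb TO PolyBaseUser;
```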
Locking
Takes a shared lock on the EXTERNAL DATA SOURCE object.
Security
PolyBase supports proxy-based authentication for most external data sources. Create a database scoped credential to create the proxy account.
A SAS token with type HADOOP is unsupported. It's only supported with type = BLOB_STORAGE, when a storage account access key is used instead. Attempting to create an external data source with type HADOOP and a SAS credential fails with the following error:
Msg 105019, Level 16, State 1 - EXTERNAL TABLE access failed due to internal error: 'Java exception raised on call to HdfsBridge_Connect. Java exception message: Parameters provided to connect to the Azure storage account are not valid.: Error [Parameters provided to connect to the Azure storage account are not valid.] occurred while accessing external file.'
Examples
A. Create external data source to reference Hadoop
To create an external data source to reference your Hortonworks HDP or Cloudera CDH, specify the machine name, or IP address of the Hadoop Namenode and port.
CREATE EXTERNAL DATA SOURCE MyHadoopCluster
WITH
( LOCATION = 'hdfs://10.10.10.10:8050' ,
TYPE = HADOOP
) ;
B. Create external data source to reference Hadoop with push-down enabled
Specify the RESOURCE_MANAGER_LOCATION option to enable push-down computation to Hadoop for PolyBase queries. Once enabled, PolyBase makes a cost-based decision to determine whether the query computation should be pushed to Hadoop.
CREATE EXTERNAL DATA SOURCE MyHadoopCluster
WITH
( LOCATION = 'hdfs://10.10.10.10:8020' ,
TYPE = HADOOP ,
RESOURCE_MANAGER_LOCATION = '10.10.10.10:8050'
) ;
C. Create external data source to reference Kerberos-secured Hadoop
To verify if the Hadoop cluster is Kerberos-secured, check the value of hadoop.security.authentication property in Hadoop core-site.xml. To reference a Kerberos-secured Hadoop cluster, you must specify a database scoped credential that contains your Kerberos username and password. The database master key is used to encrypt the database scoped credential secret.
-- Create a database master key if one does not already exist, using your own password. This key is used to encrypt the credential secret in next step.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>' ;
-- Create a database scoped credential with Kerberos user name and password.
CREATE DATABASE SCOPED CREDENTIAL HadoopUser1
WITH
IDENTITY = '<hadoop_user_name>' ,
SECRET = '<hadoop_password>' ;
-- Create an external data source with CREDENTIAL option.
CREATE EXTERNAL DATA SOURCE MyHadoopCluster
WITH
( LOCATION = 'hdfs://10.10.10.10:8050' ,
CREDENTIAL = HadoopUser1 ,
TYPE = HADOOP ,
RESOURCE_MANAGER_LOCATION = '10.10.10.10:8050'
) ;
D. Create external data source to access data in Azure Storage using the wasb:// interface
In this example, the external data source is an Azure V2 Storage account named logs. The storage container is called daily. The Azure Storage external data source is for data transfer only. It doesn't support predicate push-down. Hierarchical namespaces aren't supported when accessing data via the wasb:// interface. Note that when connecting to Azure Storage via the WASB[s] connector, authentication must be done with a storage account key, not with a shared access signature (SAS).
This example shows how to create the database scoped credential for authentication to Azure storage. Specify the Azure storage account key in the database credential secret. You can specify any string in database scoped credential identity as it isn't used during authentication to Azure storage.
-- Create a database master key if one does not already exist, using your own password. This key is used to encrypt the credential secret in next step.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>' ;
-- Create a database scoped credential with Azure storage account key as the secret.
CREATE DATABASE SCOPED CREDENTIAL AzureStorageCredential
WITH
IDENTITY = '<my_account>' ,
SECRET = '<azure_storage_account_key>' ;
-- Create an external data source with CREDENTIAL option.
CREATE EXTERNAL DATA SOURCE MyAzureStorage
WITH
( LOCATION = 'wasbs://daily@logs.blob.core.windows.net/' ,
CREDENTIAL = AzureStorageCredential ,
TYPE = HADOOP
) ;
See also
Note
Some functionality of the PolyBase feature is in private preview for Azure SQL managed instances, including the ability to query external data (Parquet files) in Azure Data Lake Storage (ADLS) Gen2. Private preview includes access to client libraries and documentation for testing purposes that are not yet available publicly. If you are interested and ready to invest some time in trying out the functionalities and sharing your feedback and questions, please review the Azure SQL Managed Instance PolyBase Private Preview Guide.