BULK INSERT (Transact-SQL)

THIS TOPIC APPLIES TO: SQL Server (starting with 2008); Azure SQL Database. DOES NOT APPLY TO: Azure SQL Data Warehouse; Parallel Data Warehouse.

Imports a data file into a database table or view in a user-specified format in SQL Server.

Important

On Azure SQL Database Managed Instance, this T-SQL feature has certain behavior changes. See Azure SQL Database Managed Instance T-SQL differences from SQL Server for details on all T-SQL behavior changes.

Transact-SQL Syntax Conventions

Syntax


BULK INSERT   
   [ database_name . [ schema_name ] . | schema_name . ] [ table_name | view_name ]   
      FROM 'data_file'   
     [ WITH   
    (   
   [ [ , ] BATCHSIZE = batch_size ]   
   [ [ , ] CHECK_CONSTRAINTS ]   
   [ [ , ] CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | 'code_page' } ]   
   [ [ , ] DATAFILETYPE =   
      { 'char' | 'native'| 'widechar' | 'widenative' } ]   
   [ [ , ] DATASOURCE = 'data_source_name' ]
   [ [ , ] ERRORFILE = 'file_name' ]
   [ [ , ] ERRORFILE_DATA_SOURCE = 'data_source_name' ]   
   [ [ , ] FIRSTROW = first_row ]   
   [ [ , ] FIRE_TRIGGERS ]   
   [ [ , ] FORMATFILE_DATASOURCE = 'data_source_name' ]
   [ [ , ] KEEPIDENTITY ]   
   [ [ , ] KEEPNULLS ]   
   [ [ , ] KILOBYTES_PER_BATCH = kilobytes_per_batch ]   
   [ [ , ] LASTROW = last_row ]   
   [ [ , ] MAXERRORS = max_errors ]   
   [ [ , ] ORDER ( { column [ ASC | DESC ] } [ ,...n ] ) ]   
   [ [ , ] ROWS_PER_BATCH = rows_per_batch ]   
   [ [ , ] ROWTERMINATOR = 'row_terminator' ]   
   [ [ , ] TABLOCK ]   

   -- input file format options
   [ [ , ] FORMAT = 'CSV' ]
   [ [ , ] FIELDQUOTE = 'quote_characters']
   [ [ , ] FORMATFILE = 'format_file_path' ]   
   [ [ , ] FIELDTERMINATOR = 'field_terminator' ]   
   [ [ , ] ROWTERMINATOR = 'row_terminator' ]   
    )]   

Arguments

database_name
Is the name of the database in which the specified table or view resides. If not specified, this is the current database.

schema_name
Is the name of the table or view schema. schema_name is optional if the default schema for the user performing the bulk-import operation is the schema of the specified table or view. If schema is not specified and the default schema of the user performing the bulk-import operation is different from the schema of the specified table or view, SQL Server returns an error message and the bulk-import operation is canceled.

table_name
Is the name of the table or view to bulk import data into. Only views in which all columns refer to the same base table can be used. For more information about the restrictions for loading data into views, see INSERT (Transact-SQL).

'data_file'
Is the full path of the data file that contains data to import into the specified table or view. BULK INSERT can import data from a disk (including network, floppy disk, hard disk, and so on).

data_file must specify a valid path from the server on which SQL Server is running. If data_file is a remote file, specify the Universal Naming Convention (UNC) name. A UNC name has the form \\SystemName\ShareName\Path\FileName. For example, \\SystemX\DiskZ\Sales\update.txt.

Applies to: SQL Server 2017 (14.x) CTP 1.1.
Beginning with SQL Server 2017 (14.x) CTP 1.1, the data_file can be in Azure Blob storage.

'data_source_name'
Applies to: SQL Server 2017 (14.x) CTP 1.1.
Is a named external data source pointing to the Azure Blob storage location of the file that will be imported. The external data source must be created using the TYPE = BLOB_STORAGE option added in SQL Server 2017 (14.x) CTP 1.1. For more information, see CREATE EXTERNAL DATA SOURCE.
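As a minimal sketch of the setup this option depends on (the credential, data source, account URL, and container names below are illustrative assumptions, not values from this topic):

```sql
-- Hypothetical names throughout; requires SQL Server 2017 (14.x) CTP 1.1 or later.
-- A database-scoped credential holds the access secret (a SAS token is one common choice).
CREATE DATABASE SCOPED CREDENTIAL BlobCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS token, without the leading ?>';

-- The external data source must use TYPE = BLOB_STORAGE.
CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
WITH ( TYPE = BLOB_STORAGE,
       LOCATION = 'https://myaccount.blob.core.windows.net/mycontainer',
       CREDENTIAL = BlobCredential );
```

The resulting name (MyAzureBlobStorage here) is what you pass as data_source_name in the DATASOURCE option shown in the syntax summary, and the data file path is then interpreted relative to the container.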

BATCHSIZE = batch_size
Specifies the number of rows in a batch. Each batch is copied to the server as one transaction. If this fails, SQL Server commits or rolls back the transaction for every batch. By default, all data in the specified data file is one batch. For information about performance considerations, see "Remarks," later in this topic.
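As a minimal sketch (the table and file names are hypothetical), a load that commits in batches of 50,000 rows rather than one large transaction might look like:

```sql
-- Each 50,000-row batch is its own transaction; a failure rolls back only
-- the current batch, and batches that already committed are kept.
BULK INSERT Sales.Orders
FROM 'C:\data\orders.dat'
WITH ( BATCHSIZE = 50000 );
```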

CHECK_CONSTRAINTS
Specifies that all constraints on the target table or view must be checked during the bulk-import operation. Without the CHECK_CONSTRAINTS option, any CHECK and FOREIGN KEY constraints are ignored, and after the operation, the constraint on the table is marked as not-trusted.

Note

UNIQUE and PRIMARY KEY constraints are always enforced. When importing into a character column that is defined with a NOT NULL constraint, BULK INSERT inserts a blank string when there is no value in the text file.

At some point, you must examine the constraints on the whole table. If the table was non-empty before the bulk-import operation, the cost of revalidating the constraints may exceed the cost of applying CHECK constraints to the incremental data.

A situation in which you might want constraints disabled (the default behavior) is if the input data contains rows that violate constraints. With CHECK constraints disabled, you can import the data and then use Transact-SQL statements to remove the invalid data.

Note

The MAXERRORS option does not apply to constraint checking.

CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | 'code_page' }
Specifies the code page of the data in the data file. CODEPAGE is relevant only if the data contains char, varchar, or text columns with character values greater than 127 or less than 32.

Important

CODEPAGE is not a supported option on Linux.

Note

Microsoft recommends that you specify a collation name for each column in a format file.

CODEPAGE value | Description
ACP | Columns of char, varchar, or text data type are converted from the ANSI/Microsoft Windows code page (ISO 1252) to the SQL Server code page.
OEM (default) | Columns of char, varchar, or text data type are converted from the system OEM code page to the SQL Server code page.
RAW | No conversion from one code page to another occurs; this is the fastest option.
code_page | Specific code page number, for example, 850.

Important: Versions prior to SQL Server 2016 (13.x) do not support code page 65001 (UTF-8 encoding).

DATAFILETYPE = { 'char' | 'native' | 'widechar' | 'widenative' }
Specifies that BULK INSERT performs the import operation using the specified data-file type value.

DATAFILETYPE value | All data represented in:
char (default) | Character format. For more information, see Use Character Format to Import or Export Data (SQL Server).
native | Native (database) data types. Create the native data file by bulk importing data from SQL Server using the bcp utility. The native value offers a higher-performance alternative to the char value. For more information, see Use Native Format to Import or Export Data (SQL Server).
widechar | Unicode characters. For more information, see Use Unicode Character Format to Import or Export Data (SQL Server).
widenative | Native (database) data types, except in char, varchar, and text columns, in which data is stored as Unicode. Create the widenative data file by bulk importing data from SQL Server using the bcp utility. The widenative value offers a higher-performance alternative to widechar. If the data file contains ANSI extended characters, specify widenative. For more information, see Use Unicode Native Format to Import or Export Data (SQL Server).

ERRORFILE = 'file_name'
Specifies the file used to collect rows that have formatting errors and cannot be converted to an OLE DB rowset. These rows are copied into this error file from the data file "as is."

The error file is created when the command is executed. An error occurs if the file already exists. Additionally, a control file that has the extension .ERROR.txt is created. This file references each row in the error file and provides error diagnostics. As soon as the errors have been corrected, the data can be loaded.

Applies to: SQL Server 2017 (14.x) CTP 1.1. Beginning with SQL Server 2017 (14.x), the error_file_path can be in Azure Blob storage.
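For example (the paths and table name are assumptions for illustration), combining ERRORFILE with MAXERRORS lets a load continue past a few malformed rows while capturing them for inspection:

```sql
-- Rows that cannot be converted are written to orders.err "as is";
-- a control file orders.err.ERROR.txt with per-row diagnostics is also
-- created. The load is canceled only if more than 10 rows fail.
BULK INSERT Sales.Orders
FROM 'C:\data\orders.dat'
WITH ( ERRORFILE = 'C:\data\orders.err',
       MAXERRORS = 10 );
```

Note that the statement fails if C:\data\orders.err already exists from a previous run.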

'errorfile_data_source_name'
Applies to: SQL Server 2017 (14.x) CTP 1.1. Is a named external data source pointing to the Azure Blob storage location of the error file that will contain errors found during the import. The external data source must be created using the TYPE = BLOB_STORAGE option added in SQL Server 2017 (14.x) CTP 1.1. For more information, see CREATE EXTERNAL DATA SOURCE.

FIRSTROW = first_row
Specifies the number of the first row to load. The default is the first row in the specified data file. FIRSTROW is 1-based.

Note

The FIRSTROW attribute is not intended to skip column headers. Skipping headers is not supported by the BULK INSERT statement. When skipping rows, the SQL Server Database Engine looks only at the field terminators, and does not validate the data in the fields of skipped rows.
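A sketch (with hypothetical names) that loads only rows 1,001 through 2,000 of a data file, using FIRSTROW together with the LASTROW option described below:

```sql
-- Loads a 1,000-row slice of the file; rows outside the range are scanned
-- for field terminators only, and their field contents are not validated.
BULK INSERT Sales.Orders
FROM 'C:\data\orders.dat'
WITH ( FIRSTROW = 1001, LASTROW = 2000 );
```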

FIRE_TRIGGERS
Specifies that any insert triggers defined on the destination table execute during the bulk-import operation. If triggers are defined for INSERT operations on the target table, they are fired for every completed batch.

If FIRE_TRIGGERS is not specified, no insert triggers execute.

FORMATFILE_DATASOURCE = 'data_source_name'
Applies to: SQL Server 2017 (14.x) CTP 1.1.
Is a named external data source pointing to the Azure Blob storage location of the format file that will define the schema of the imported data. The external data source must be created using the TYPE = BLOB_STORAGE option added in SQL Server 2017 (14.x) CTP 1.1. For more information, see CREATE EXTERNAL DATA SOURCE.

KEEPIDENTITY
Specifies that identity value or values in the imported data file are to be used for the identity column. If KEEPIDENTITY is not specified, the identity values for this column are verified but not imported, and SQL Server automatically assigns unique values based on the seed and increment values specified during table creation. If the data file does not contain values for the identity column in the table or view, use a format file to specify that the identity column in the table or view is to be skipped when importing data; SQL Server automatically assigns unique values for the column. For more information, see DBCC CHECKIDENT (Transact-SQL).

For more information about keeping identity values, see Keep Identity Values When Bulk Importing Data (SQL Server).
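A minimal sketch, assuming a hypothetical table whose first column is an identity column and a data file that supplies explicit values for it:

```sql
-- Orders is assumed to be defined as (OrderID int IDENTITY, ...).
-- With KEEPIDENTITY the file's OrderID values are preserved verbatim
-- instead of being regenerated from the seed and increment.
BULK INSERT Sales.Orders
FROM 'C:\data\orders.dat'
WITH ( KEEPIDENTITY );
```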

KEEPNULLS
Specifies that empty columns should retain a null value during the bulk-import operation, instead of having any default values for the columns inserted. For more information, see Keep Nulls or Use Default Values During Bulk Import (SQL Server).

KILOBYTES_PER_BATCH = kilobytes_per_batch
Specifies the approximate number of kilobytes (KB) of data per batch as kilobytes_per_batch. By default, KILOBYTES_PER_BATCH is unknown. For information about performance considerations, see "Remarks," later in this topic.

LASTROW = last_row
Specifies the number of the last row to load. The default is 0, which indicates the last row in the specified data file.

MAXERRORS = max_errors
Specifies the maximum number of syntax errors allowed in the data before the bulk-import operation is canceled. Each row that cannot be imported by the bulk-import operation is ignored and counted as one error. If max_errors is not specified, the default is 10.

Note

The MAX_ERRORS option does not apply to constraint checks or to converting money and bigint data types.

ORDER ( { column [ ASC | DESC ] } [ ,... n ] )
Specifies how the data in the data file is sorted. Bulk-import performance is improved if the data being imported is sorted according to the clustered index on the table, if any. If the data file is sorted in a different order, that is, other than the order of a clustered index key, or if there is no clustered index on the table, the ORDER clause is ignored. The column names supplied must be valid column names in the destination table. By default, the bulk insert operation assumes the data file is unordered. For optimized bulk import, SQL Server also validates that the imported data is sorted.

n
Is a placeholder that indicates that multiple columns can be specified.
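A sketch of the hint (the table, index, and file names are assumptions): if the target table's clustered index is on (OrderDate, OrderID) and the file is already sorted that way, declaring the order lets the engine skip its own sort:

```sql
-- The ORDER hint must match the clustered index key exactly, or it is
-- ignored; the engine also validates that the data really is sorted.
BULK INSERT Sales.Orders
FROM 'C:\data\orders_sorted.dat'
WITH ( ORDER ( OrderDate ASC, OrderID ASC ) );
```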

ROWS_PER_BATCH = rows_per_batch
Indicates the approximate number of rows of data in the data file.

By default, all the data in the data file is sent to the server as a single transaction, and the number of rows in the batch is unknown to the query optimizer. If you specify ROWS_PER_BATCH (with a value > 0), the server uses this value to optimize the bulk-import operation. The value specified for ROWS_PER_BATCH should be approximately the same as the actual number of rows. For information about performance considerations, see "Remarks," later in this topic.

TABLOCK
Specifies that a table-level lock is acquired for the duration of the bulk-import operation. A table can be loaded concurrently by multiple clients if the table has no indexes and TABLOCK is specified. By default, locking behavior is determined by the table option table lock on bulk load. Holding a lock for the duration of the bulk-import operation reduces lock contention on the table and in some cases can significantly improve performance. For information about performance considerations, see "Remarks," later in this topic.

For a columnstore index, the locking behavior is different because the index is internally divided into multiple rowsets. Each thread loads data exclusively into each rowset by taking an X lock on the rowset, allowing parallel data loads with concurrent data load sessions. Using the TABLOCK option causes the thread to take an X lock on the table (unlike the BU lock taken for traditional rowsets), which prevents other concurrent threads from loading data concurrently.
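A sketch of the parallel-load pattern on a heap (table and file names are hypothetical):

```sql
-- Run one statement like this per session, each pointing at a different
-- file. Against a heap with no indexes, each session takes a BU lock
-- under TABLOCK, so the loads proceed in parallel rather than blocking.
BULK INSERT Sales.OrdersStaging
FROM 'C:\data\orders_part1.dat'
WITH ( TABLOCK );
```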

Input file format options

FORMAT = 'CSV'
Applies to: SQL Server 2017 (14.x) CTP 1.1.
Specifies a comma-separated values file compliant with the RFC 4180 standard.

FIELDQUOTE = 'field_quote'
Applies to: SQL Server 2017 (14.x) CTP 1.1.
Specifies a character that will be used as the quote character in the CSV file. If not specified, the quote character (") will be used as the quote character as defined in the RFC 4180 standard.
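A minimal sketch of a CSV load using these two options (table and path are assumptions; requires SQL Server 2017 (14.x) CTP 1.1 or later):

```sql
-- FIELDQUOTE is shown explicitly even though '"' is already the RFC 4180
-- default; quoted fields may then contain embedded commas and line breaks.
BULK INSERT Sales.Orders
FROM 'C:\data\orders.csv'
WITH ( FORMAT = 'CSV', FIELDQUOTE = '"' );
```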

FORMATFILE = 'format_file_path'
Specifies the full path of a format file. A format file describes the data file that contains stored responses created by using the bcp utility on the same table or view. The format file should be used if:

  • The data file contains greater or fewer columns than the table or view.

  • The columns are in a different order.

  • The column delimiters vary.

  • There are other changes in the data format. Format files are typically created by using the bcp utility and modified with a text editor as needed. For more information, see bcp Utility.

Applies to: SQL Server 2017 (14.x) CTP 1.1.
Beginning with SQL Server 2017 (14.x) CTP 1.1, the format_file_path can be in Azure Blob storage.

FIELDTERMINATOR = 'field_terminator'
Specifies the field terminator to be used for char and widechar data files. The default field terminator is \t (tab character). For more information, see Specify Field and Row Terminators (SQL Server).

ROWTERMINATOR = 'row_terminator'
Specifies the row terminator to be used for char and widechar data files. The default row terminator is \r\n (newline character). For more information, see Specify Field and Row Terminators (SQL Server).

Compatibility

BULK INSERT enforces strict data validation and data checks of data read from a file that could cause existing scripts to fail when they are executed on invalid data. For example, BULK INSERT verifies that:

  • The native representations of float or real data types are valid.

  • Unicode data has an even-byte length.

Data Types

String-to-Decimal Data Type Conversions

The string-to-decimal data type conversions used in BULK INSERT follow the same rules as the Transact-SQL CONVERT function, which rejects strings representing numeric values that use scientific notation. Therefore, BULK INSERT treats such strings as invalid values and reports conversion errors.

To work around this behavior, use a format file to bulk import scientific notation float data into a decimal column. In the format file, explicitly describe the column as real or float data. For more information about these data types, see float and real (Transact-SQL).

Note

Format files represent real data as the SQLFLT4 data type and float data as the SQLFLT8 data type. For information about non-XML format files, see Specify File Storage Type by Using bcp (SQL Server).

Example of Importing a Numeric Value that Uses Scientific Notation

This example uses the following table:

CREATE TABLE t_float(c1 float, c2 decimal (5,4));  

The user wants to bulk import data into the t_float table. The data file, C:\t_float-c.dat, contains scientific notation float data; for example:

8.0000000000000002E-2  

However, BULK INSERT cannot import this data directly into t_float, because its second column, c2, uses the decimal data type. Therefore, a format file is necessary. The format file must map the scientific notation float data to the decimal format of column c2.

The following format file uses the SQLFLT8 data type to map the second data field to the second column:

<?xml version="1.0"?> 
<BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/bulkload/format" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> 
  <RECORD> 
    <FIELD ID="1" xsi:type="CharTerm" TERMINATOR="\t" MAX_LENGTH="30"/> 
    <FIELD ID="2" xsi:type="CharTerm" TERMINATOR="\r\n" MAX_LENGTH="30"/> 
  </RECORD> 
  <ROW> 
    <COLUMN SOURCE="1" NAME="c1" xsi:type="SQLFLT8"/> 
    <COLUMN SOURCE="2" NAME="c2" xsi:type="SQLFLT8"/> 
  </ROW> 
</BCPFORMAT>

To use this format file (using the file name C:\t_floatformat-c-xml.xml) to import the test data into the test table, issue the following Transact-SQL statement:

BULK INSERT bulktest..t_float  
FROM 'C:\t_float-c.dat' WITH (FORMATFILE='C:\t_floatformat-c-xml.xml');  
GO  

Data Types for Bulk Exporting or Importing SQLXML Documents

To bulk export or import SQLXML data, use one of the following data types in your format file:

Data type | Effect
SQLCHAR or SQLVARCHAR | The data is sent in the client code page or in the code page implied by the collation. The effect is the same as specifying DATAFILETYPE = 'char' without specifying a format file.
SQLNCHAR or SQLNVARCHAR | The data is sent as Unicode. The effect is the same as specifying DATAFILETYPE = 'widechar' without specifying a format file.
SQLBINARY or SQLVARBIN | The data is sent without any conversion.

General Remarks

For a comparison of the BULK INSERT statement, the INSERT ... SELECT * FROM OPENROWSET(BULK...) statement, and the bcp command, see Bulk Import and Export of Data (SQL Server).

For information about preparing data for bulk import, see Prepare Data for Bulk Export or Import (SQL Server).

The BULK INSERT statement can be executed within a user-defined transaction to import data into a table or view. Optionally, to use multiple transactions for bulk importing data, a transaction can specify the BATCHSIZE clause in the BULK INSERT statement. If a multiple-batch transaction is rolled back, every batch that the transaction has sent to SQL Server is rolled back.

Interoperability

Importing Data from a CSV file

Beginning with SQL Server 2017 (14.x) CTP 1.1, BULK INSERT supports the CSV format.
Before SQL Server 2017 (14.x) CTP 1.1, comma-separated value (CSV) files were not supported by SQL Server bulk-import operations. However, in some cases, a CSV file can be used as the data file for a bulk import of data into SQL Server. For information about the requirements for importing data from a CSV data file, see Prepare Data for Bulk Export or Import (SQL Server).

Logging Behavior

For information about when row-insert operations that are performed by bulk import are logged in the transaction log, see Prerequisites for Minimal Logging in Bulk Import.

Restrictions

When using a format file with BULK INSERT, you can specify up to 1024 fields only. This is the same as the maximum number of columns allowed in a table. If you use BULK INSERT with a data file that contains more than 1024 fields, BULK INSERT generates the 4822 error. The bcp utility does not have this limitation, so for data files that contain more than 1024 fields, use the bcp command.

Performance Considerations

If the number of pages to be flushed in a single batch exceeds an internal threshold, a full scan of the buffer pool might occur to identify which pages to flush when the batch commits. This full scan can hurt bulk-import performance. A likely case of exceeding the internal threshold occurs when a large buffer pool is combined with a slow I/O subsystem. To avoid buffer overflows on large machines, either do not use the TABLOCK hint (which will remove the bulk optimizations) or use a smaller batch size (which preserves the bulk optimizations).

Because computers vary, we recommend that you test various batch sizes with your data load to find out what works best for you.

SecuritySecurity

安全帐户委托(模拟)Security Account Delegation (Impersonation)

如果用户使用的是 SQL ServerSQL Server 登录名,则系统将使用 SQL ServerSQL Server 进程帐户的安全配置文件。If a user uses a SQL ServerSQL Server login, the security profile of the SQL ServerSQL Server process account is used. 使用 SQL Server 身份验证的登录名不能在数据库引擎外部进行身份验证。A login using SQL Server authentication cannot be authenticated outside of the Database Engine. 因此,当 BULK INSERT 命令由使用 SQL Server 身份验证的登录名启动时,使用 SQL Server 进程帐户(SQL Server 数据库引擎服务使用的帐户)的安全上下文建立到数据的连接。Therefore, when a BULK INSERT command is initiated by a login using SQL Server authentication, the connection to the data is made using the security context of the SQL Server process account (the account used by the SQL Server Database Engine service). 要成功读取源数据,您必须授予 SQL Server 数据库引擎使用的帐户访问源数据的权限。与此相反,如果 SQL ServerSQL Server 用户使用 Windows 身份验证登录,则该用户只能读取用户帐户可以访问的那些文件,而不考虑 SQL ServerSQL Server 进程的安全配置文件。To successfully read the source data you must grant the account used by the SQL Server Database Engine, access to the source data.In contrast, if a SQL ServerSQL Server user logs on by using Windows Authentication, the user can read only those files that can be accessed by the user account, regardless of the security profile of the SQL ServerSQL Server process.

如果在一台计算机上使用 sqlcmdosql 执行 BULK INSERT 语句,将数据插入第二台计算机上的 SQL ServerSQL Server,同时使用 UNC 路径在第三台计算机上指定 data_file,可能会收到 4861 错误。When executing the BULK INSERT statement by using sqlcmd or osql, from one computer, inserting data into SQL ServerSQL Server on a second computer, and specifying a data_file on third computer by using a UNC path, you may receive a 4861 error.

若要解决此问题,请使用 SQL ServerSQL Server 身份验证并指定一个使用 SQL ServerSQL Server 进程帐户安全配置文件的 SQL ServerSQL Server 登录名,或者配置 Windows 以启用安全帐户委托。To resolve this error, use SQL ServerSQL Server Authentication and specify a SQL ServerSQL Server login that uses the security profile of the SQL ServerSQL Server process account, or configure Windows to enable security account delegation. 有关如何使用户帐户可信以进行委托的信息,请参阅 Windows 帮助。For information about how to enable a user account to be trusted for delegation, see Windows Help.

For more information about this and other security considerations for using BULK INSERT, see Import Bulk Data by Using BULK INSERT or OPENROWSET(BULK...) (SQL Server).
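As a minimal sketch of the first workaround, the session connects with SQL Server Authentication, so the file is read under the Database Engine service account rather than the caller's identity (the server, share, and file names below are hypothetical):

```sql
-- Hypothetical UNC path: because this session uses SQL Server Authentication,
-- the file is read by the SQL Server service account, so that account
-- (not the calling login) must have read access to \\fileserver\orders.
BULK INSERT AdventureWorks2012.Sales.SalesOrderDetail
FROM '\\fileserver\orders\lineitem.tbl'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '|\n');
```

If the caller uses Windows Authentication instead, the caller's own account must have access to the share, and delegation must be configured when the file lives on a third machine.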

Permissions

Requires INSERT and ADMINISTER BULK OPERATIONS permissions. In Azure SQL Database, INSERT and ADMINISTER DATABASE BULK OPERATIONS permissions are required. Additionally, ALTER TABLE permission is required if one or more of the following is true:

  • Constraints exist and the CHECK_CONSTRAINTS option is not specified.

    Note

    Disabling constraints is the default behavior. To check constraints explicitly, use the CHECK_CONSTRAINTS option.

  • Triggers exist and the FIRE_TRIGGERS option is not specified.

    Note

    By default, triggers are not fired. To fire triggers explicitly, use the FIRE_TRIGGERS option.

  • You use the KEEPIDENTITY option to import identity values from the data file.
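As a hedged illustration of the grants described above (the login name [AdventureWorks\BulkLoader] and the target table are hypothetical; ADMINISTER BULK OPERATIONS is a server-level permission granted to a login):

```sql
-- Minimum permissions for a plain BULK INSERT into this table.
GRANT INSERT ON Sales.SalesOrderDetail TO [AdventureWorks\BulkLoader];
GRANT ADMINISTER BULK OPERATIONS TO [AdventureWorks\BulkLoader];

-- Additionally required when CHECK_CONSTRAINTS is omitted while constraints
-- exist, when FIRE_TRIGGERS is omitted while triggers exist, or when
-- KEEPIDENTITY is used:
GRANT ALTER ON Sales.SalesOrderDetail TO [AdventureWorks\BulkLoader];
```

On Azure SQL Database, substitute ADMINISTER DATABASE BULK OPERATIONS for the server-level grant.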

Examples

A. Using pipes to import data from a file

The following example imports order detail information into the AdventureWorks2012.Sales.SalesOrderDetail table from the specified data file, using a pipe (|) as the field terminator and |\n as the row terminator.

BULK INSERT AdventureWorks2012.Sales.SalesOrderDetail  
   FROM 'f:\orders\lineitem.tbl'  
   WITH   
      (  
         FIELDTERMINATOR = '|',  
         ROWTERMINATOR = '|\n'  
      );  

B. Using the FIRE_TRIGGERS argument

The following example specifies the FIRE_TRIGGERS argument.

BULK INSERT AdventureWorks2012.Sales.SalesOrderDetail  
   FROM 'f:\orders\lineitem.tbl'  
   WITH  
     (  
        FIELDTERMINATOR = '|',  
        ROWTERMINATOR = '|\n',  
        FIRE_TRIGGERS  
      );  

C. Using line feed as a row terminator

The following example imports a file that uses a line feed as its row terminator, such as a UNIX output file:

DECLARE @bulk_cmd varchar(1000);  
SET @bulk_cmd = 'BULK INSERT AdventureWorks2012.Sales.SalesOrderDetail  
FROM ''<drive>:\<path>\<filename>''   
WITH (ROWTERMINATOR = '''+CHAR(10)+''')';  
EXEC(@bulk_cmd);  

Note

CHAR(10) is used because of the way Microsoft Windows treats text files: a \n typed in a script automatically gets replaced with \r\n.

D. Specifying a code page

The following example shows how to specify a code page.

BULK INSERT MyTable  
FROM 'D:\data.csv'  
WITH  
( CODEPAGE = '65001',  
    DATAFILETYPE = 'char',  
    FIELDTERMINATOR = ','  
);  

E. Importing data from a CSV file

The following example shows how to specify a CSV file.

BULK INSERT Sales.Invoices
FROM '\\share\invoices\inv-2016-07-25.csv'
WITH (FORMAT = 'CSV'); 

F. Importing data from a file in Azure Blob storage

The following example shows how to load data from a CSV file in an Azure Blob storage location that has been configured as an external data source. This requires a database scoped credential using a shared access signature.
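As a sketch of that prerequisite setup (the storage account URL is hypothetical, and the password and SAS token are placeholders you must supply):

```sql
-- A database master key must exist before a database scoped credential
-- can be created.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

-- The SECRET is the shared access signature token, without the leading '?'.
CREATE DATABASE SCOPED CREDENTIAL MyAzureInvoicesCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS token>';

-- The external data source name is what DATA_SOURCE refers to in the
-- BULK INSERT statement.
CREATE EXTERNAL DATA SOURCE MyAzureInvoices
WITH (TYPE = BLOB_STORAGE,
      LOCATION = 'https://myaccount.blob.core.windows.net/invoices',
      CREDENTIAL = MyAzureInvoicesCredential);
```

With the external data source in place, the FROM clause names the file relative to the container location.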

BULK INSERT Sales.Invoices
FROM 'inv-2017-01-19.csv'
WITH (DATA_SOURCE = 'MyAzureInvoices',
     FORMAT = 'CSV'); 

G. Importing data from a file in Azure Blob storage and specifying an error file

The following example shows how to load data from a CSV file in an Azure Blob storage location that has been configured as an external data source, and also specifies an error file. This requires a database scoped credential using a shared access signature. Note that when running on Azure SQL Database, the ERRORFILE option should be accompanied by ERRORFILE_DATA_SOURCE; otherwise the import might fail with a permissions error. The file specified in ERRORFILE should not already exist in the container.

BULK INSERT Sales.Invoices
FROM 'inv-2017-01-19.csv'
WITH (DATA_SOURCE = 'MyAzureInvoices',
     FORMAT = 'CSV',
     ERRORFILE = 'MyErrorFile',
     ERRORFILE_DATA_SOURCE = 'MyAzureInvoices'); 

For complete BULK INSERT examples, including configuring the credential and external data source, see Examples of Bulk Access to Data in Azure Blob Storage.

Additional Examples

Other BULK INSERT examples are provided in the following topics:

See Also

Bulk Import and Export of Data (SQL Server)
bcp Utility
Format Files for Importing or Exporting Data (SQL Server)
INSERT (Transact-SQL)
OPENROWSET (Transact-SQL)
Prepare Data for Bulk Export or Import (SQL Server)
sp_tableoption (Transact-SQL)