Flexible File Destination

Applies to: SQL Server (all supported versions); SSIS Integration Runtime in Azure Data Factory

The Flexible File Destination component enables an SSIS package to write data to various supported storage services.

The currently supported storage services are Azure Blob Storage and Azure Data Lake Storage Gen2.

Drag and drop the Flexible File Destination onto the data flow designer, then double-click it to open the editor.

The Flexible File Destination is a component of the SQL Server Integration Services (SSIS) Feature Pack for Azure.

The following properties are available on the Flexible File Destination Editor.

  • File Connection Manager Type: Specifies the source connection manager type. Then choose an existing connection manager of the specified type or create a new one.
  • Folder Path: Specifies the destination folder path.
  • File Name: Specifies the destination file name.
  • File Format: Specifies the destination file format. Supported formats are Text, Avro, ORC, and Parquet. Java is required for ORC and Parquet. See here for details.
  • Column delimiter character: Specifies the character to use as the column delimiter (multi-character delimiters are not supported).
  • First row as the column name: Specifies whether to write column names to the first row.
  • Compress the file: Specifies whether to compress the file.
  • Compression Type: Specifies the compression format to use. Supported formats are GZIP, DEFLATE, and BZIP2.
  • Compression Level: Specifies the compression level to use.
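The component itself is configured in the SSIS designer rather than in code, but the effect of the Text-format settings above can be sketched with Python's standard library. The column names and rows below are hypothetical; the sketch assumes a comma column delimiter, a header row, and GZIP compression:

```python
import csv
import gzip
import io

# Hypothetical data for illustration only.
columns = ["Id", "Name"]
rows = [("1", "Alice"), ("2", "Bob")]

buf = io.BytesIO()
# "Compress the file" = True with Compression Type = GZIP
with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
    text = io.TextIOWrapper(gz, encoding="utf-8", newline="")
    writer = csv.writer(text, delimiter=",")  # Column delimiter character
    writer.writerow(columns)                  # First row as the column name
    writer.writerows(rows)
    text.flush()
    text.detach()

# Decompressing recovers the delimited text file.
data = gzip.decompress(buf.getvalue()).decode("utf-8")
print(data)
```

Decompressing the buffer yields the header row followed by the data rows, one line each.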

The following properties are available on the Advanced Editor.

  • rowDelimiter: The character used to separate rows in a file. Only one character is allowed. The default value is \r\n.
  • escapeChar: The special character used to escape a column delimiter in the content of the input file. You cannot specify both escapeChar and quoteChar for a table. Only one character is allowed. There is no default value.
  • quoteChar: The character used to quote a string value. Column and row delimiters inside the quote characters are treated as part of the string value. This property applies to both input and output datasets. You cannot specify both escapeChar and quoteChar for a table. Only one character is allowed. There is no default value.
  • nullValue: One or more characters used to represent a null value. The default value is \N.
  • encodingName: Specifies the encoding name. See the Encoding.EncodingName property.
  • skipLineCount: Indicates the number of non-empty rows to skip when reading data from input files. If both skipLineCount and firstRowAsHeader are specified, the lines are skipped first and then the header information is read from the input file.
  • treatEmptyAsNull: Specifies whether to treat a null or empty string as a null value when reading data from an input file. The default value is True.
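The component's own parser is not exposed as code, but the mutually exclusive roles of escapeChar and quoteChar can be illustrated with Python's csv module. The value `a,b` below is hypothetical; it contains the column delimiter and so must be either quoted or escaped:

```python
import csv
import io

# A hypothetical value that contains the column delimiter.
row = ["a,b", "plain"]

# quoteChar style: delimiters inside the quote characters are kept
# as part of the string value.
quoted = io.StringIO()
csv.writer(quoted, delimiter=",", quotechar='"',
           quoting=csv.QUOTE_MINIMAL).writerow(row)

# escapeChar style: the delimiter is escaped instead of quoted.
escaped = io.StringIO()
csv.writer(escaped, delimiter=",", escapechar="\\",
           quoting=csv.QUOTE_NONE).writerow(row)

print(quoted.getvalue())   # "a,b",plain
print(escaped.getvalue())  # a\,b,plain
```

This is why the two properties cannot both be specified for a table: each is a complete, alternative strategy for representing a delimiter inside a value.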

After specifying the connection information, switch to the Columns page to map source columns to destination columns for the SSIS data flow.

Notes on Service Principal Permission Configuration

For Test Connection to work (with either Blob Storage or Data Lake Storage Gen2), the service principal should be assigned at least the Storage Blob Data Reader role on the storage account. This is done with role-based access control (RBAC).

For Blob Storage, write permission is granted by assigning at least the Storage Blob Data Contributor role.

For Data Lake Storage Gen2, permission is determined by both RBAC and ACLs. Note that ACLs are configured using the Object ID (OID) of the service principal for the app registration, as detailed here. This is different from the Application (client) ID that is used with RBAC configuration. When a security principal is granted RBAC data permissions through a built-in or custom role, these permissions are evaluated first upon authorization of a request. If the requested operation is authorized by the security principal's RBAC assignments, authorization is immediately resolved and no additional ACL checks are performed. Alternatively, if the security principal has no RBAC assignment, or the requested operation does not match the assigned permission, ACL checks are performed to determine whether the security principal is authorized to perform the requested operation. For write permission, grant at least Execute permission starting from the sink file system, along with Write permission on the sink folder. Alternatively, grant at least the Storage Blob Data Contributor role with RBAC. See this article for details.
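The RBAC-first evaluation order described above can be sketched as a small decision function. The function name, parameters, and the callable ACL check are all hypothetical, for illustration only:

```python
def is_authorized(principal, operation, rbac_operations, acl_check):
    """Sketch of the Data Lake Storage Gen2 authorization order.

    rbac_operations: set of operations granted to the principal via RBAC roles.
    acl_check: callable(principal, operation) -> bool, the ACL fallback check.
    (Hypothetical signatures; the real evaluation happens inside Azure Storage.)
    """
    # RBAC data permissions are evaluated first...
    if operation in rbac_operations:
        return True  # ...and no additional ACL checks are performed.
    # No matching RBAC assignment: fall back to the ACL check.
    return acl_check(principal, operation)

# RBAC grants "write", so the ACL is never consulted.
print(is_authorized("sp-oid", "write", {"write"}, lambda p, o: False))  # True
# No RBAC assignment: the ACL decides.
print(is_authorized("sp-oid", "write", set(), lambda p, o: True))       # True
```

The practical consequence is the choice offered in the text: either configure Execute and Write ACLs down to the sink folder, or skip ACLs entirely by granting the Storage Blob Data Contributor role.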