Start by configuring a Get Metadata activity to list all files in the blob folder, selecting Child Items in the Field list.
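For reference, here is a minimal sketch of the Get Metadata activity JSON, assuming a folder-level dataset named SourceBlobFolder (a hypothetical name):

{
    "name": "GetMetadataActivity",
    "type": "GetMetadata",
    "typeProperties": {
        "dataset": {
            "referenceName": "SourceBlobFolder",
            "type": "DatasetReference"
        },
        "fieldList": [ "childItems" ]
    }
}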
Next, use a ForEach activity to iterate over the files, setting its Items property to @activity('GetMetadataActivity').output.childItems.
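A sketch of the corresponding ForEach activity, wired to run after the Get Metadata activity (names assumed to match the snippet above):

{
    "name": "ForEachFile",
    "type": "ForEach",
    "dependsOn": [
        {
            "activity": "GetMetadataActivity",
            "dependencyConditions": [ "Succeeded" ]
        }
    ],
    "typeProperties": {
        "items": {
            "value": "@activity('GetMetadataActivity').output.childItems",
            "type": "Expression"
        },
        "activities": []
    }
}

The Copy Data activity described below goes inside the activities array.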
Create a parameter named FilePath in the dataset that points to the blob storage, and change the dataset's file path to use this parameter (e.g., @dataset().FilePath).
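A minimal sketch of the parameterized dataset, assuming JSON source files, a hypothetical linked service named AzureBlobStorageLinkedService, and a hypothetical container name:

{
    "name": "ParameterizedBlobDataset",
    "properties": {
        "type": "Json",
        "linkedServiceName": {
            "referenceName": "AzureBlobStorageLinkedService",
            "type": "LinkedServiceReference"
        },
        "parameters": {
            "FilePath": { "type": "string" }
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "source-container",
                "fileName": {
                    "value": "@dataset().FilePath",
                    "type": "Expression"
                }
            }
        }
    }
}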
Inside the ForEach activity, add a Copy Data activity, configure its source to use the parameterized dataset, and pass @item().name to the FilePath parameter.
Note that the Copy Data activity does not support Power Query directly; to apply Power Query transformations to each file, add a separate Power Query (wrangling data flow) activity inside the ForEach and design the necessary transformations there.
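If you take the Power Query route, a rough sketch of the activity JSON follows; the mashup itself (here named WranglePowerQuery, a hypothetical name) is authored in the UI, and the activity type shown should be verified against the JSON your factory generates:

{
    "name": "TransformFile",
    "type": "ExecuteWranglingDataflow",
    "typeProperties": {
        "dataflow": {
            "referenceName": "WranglePowerQuery",
            "type": "DataFlowReference"
        }
    }
}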
Finally, configure the sink to point to your Azure SQL Database, ensuring proper column mapping (see the mapping sketch after the JSON below). This setup lets each file be transformed and loaded into the Azure SQL Database without creating a separate dataset per file, streamlining the data integration process.
Here is how your pipeline JSON might look for the Copy Data activity:
{
    "name": "CopyDataActivity",
    "type": "Copy",
    "inputs": [
        {
            "referenceName": "ParameterizedBlobDataset",
            "type": "DatasetReference",
            "parameters": {
                "FilePath": "@item().name"
            }
        }
    ],
    "outputs": [
        {
            "referenceName": "AzureSqlDatabaseDataset",
            "type": "DatasetReference"
        }
    ],
    "typeProperties": {
        "source": {
            "type": "JsonSource",
            "storeSettings": {
                "type": "AzureBlobStorageReadSettings"
            }
        },
        "sink": {
            "type": "AzureSqlSink"
        }
    }
}
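If the source and sink column names differ, you can make the mapping explicit by adding a translator to the Copy activity's typeProperties; a sketch with hypothetical column names:

"translator": {
    "type": "TabularTranslator",
    "mappings": [
        {
            "source": { "path": "$['customerName']" },
            "sink": { "name": "CustomerName", "type": "String" }
        }
    ]
}

With a JSON source, the source side of each mapping uses a JSON path; for a tabular source it would reference a column name instead.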