Below is the JSON structure of the output file. The source data comes from a flat file. How can I achieve this in a mapping data flow of an ADF pipeline?
I created subcolumns under data (empNumber, changeticket, OrderAssociated, function, History) and again created subcolumns under each of them respectively.
In an aggregate transformation, if I group by id, time, and count and aggregate with collect(data), then data comes out as an array. But in this scenario data should
come out as an object, while changeticket, OrderAssociated, function, and History should come out as arrays. I am not sure whether we can use two aggregate transformations in the same mapping data flow.
{
  "id": "2eac205b",
  "time": "2021-11-10T08:19:22.111Z",
  "count": 1,
  "data": {
    "empNumber": 12345,
    "changeticket": [
      {
        "Developer": "XYZ",
        "CreateDate": "20211011T081026Z",
        "ChangeNumber": 12345,
        "RequiredLocation": {
          "LocationTags": "XXXXX",
          "description": "Testing"
        }
      }
    ],
    "OrderAssociated": [
      {
        "ChangeNumber": 12345,
        "OrderNumber": null,
        "Unit": "XXX XXX"
      }
    ],
    "function": [],
    "History": [
      {
        "change": "SAVED AS DRAFT",
        "comments": "Draft",
        "userName": "PSP",
        "status": "NEW"
      }
    ]
  }
}
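For reference, here is the reshaping logic expressed in plain Python: group flat rows by (id, time, count), keep data as a single object, and collect only the inner entities (changeticket, OrderAssociated, History) as arrays. This is a hedged sketch, not ADF syntax; the flat-file column names (prefixed ct_, oa_, h_) and the reduced set of fields are assumptions for illustration only.

```python
import json
from collections import OrderedDict

# Sample flat rows; column names and values are hypothetical.
rows = [
    {"id": "2eac205b", "time": "2021-11-10T08:19:22.111Z", "count": 1,
     "empNumber": 12345,
     "ct_Developer": "XYZ", "ct_ChangeNumber": 12345,
     "oa_ChangeNumber": 12345, "oa_Unit": "XXX XXX",
     "h_change": "SAVED AS DRAFT", "h_status": "NEW"},
]

grouped = OrderedDict()
for r in rows:
    key = (r["id"], r["time"], r["count"])
    # "data" is created once per group as an object, not collected as an array.
    rec = grouped.setdefault(key, {
        "id": r["id"], "time": r["time"], "count": r["count"],
        "data": {"empNumber": r["empNumber"], "changeticket": [],
                 "OrderAssociated": [], "function": [], "History": []},
    })
    # Only the inner entities are collected into arrays within the data object.
    rec["data"]["changeticket"].append(
        {"Developer": r["ct_Developer"], "ChangeNumber": r["ct_ChangeNumber"]})
    rec["data"]["OrderAssociated"].append(
        {"ChangeNumber": r["oa_ChangeNumber"], "Unit": r["oa_Unit"]})
    rec["data"]["History"].append(
        {"change": r["h_change"], "status": r["h_status"]})

result = list(grouped.values())
print(json.dumps(result, indent=2))
```

In ADF terms this corresponds to building the data object (e.g. in a derived column or the aggregate's pattern) rather than wrapping it in collect(), and applying collect() only to the subcolumns that should become arrays.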