question

STAKV-1497 asked · JigneshRathod-3679 commented

ADF Data Flows, HadoopSqlException: Arithmetic overflow error converting expression to data type NVARCHAR at sink

Hello,

I got the following error message thrown by a pipeline containing a data flow activity.

Operation on target df_stg_json_to_synapse_seedesadv failed: {"StatusCode":"DFExecutorUserError","Message":"Job failed due to reason: at Sink 'ItemSeedesadv': shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerException: HdfsBridge::recordReaderFillBuffer - Unexpected error encountered filling record reader buffer: HadoopSqlException: Arithmetic overflow error converting expression to data type NVARCHAR.","Details":"at Sink 'ItemSeedesadv': shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerException: HdfsBridge::recordReaderFillBuffer - Unexpected error encountered filling record reader buffer: HadoopSqlException: Arithmetic overflow error converting expression to data type NVARCHAR."}

The target table no longer contains any NVARCHAR columns, and the dataset schema has been refreshed.

The target table structure is:

CREATE TABLE [stg].[ItemSeedesadv]
(
[File_Name] [varchar](500) NULL,
[Me_MessageNumber] [varchar](100) NULL,
[ItemPos_LineItemNumber] [varchar](500) NULL,
[ItemPos_LineSubItemNumber] [varchar](500) NULL,
[ItemPos_CountryOfOrigin] [varchar](500) NULL,
[ItemPos_ItemDescription] [varchar](max) NULL,
[Item_ItemNumber] [varchar](max) NULL,
[Item_Qty] [varchar](max) NULL,
[Item_AmountDetails] [varchar](max) NULL,
[Item_Ref] [varchar](max) NULL,
[Item_Date] [varchar](max) NULL,
[Item_Add] [varchar](max) NULL,
[Item_WeightAndVolume] [varchar](max) NULL,
[Item_Txt] [varchar](max) NULL,
[Item_DangerousGoods] [varchar](max) NULL,
[Item_Pac_Package] [varchar](max) NULL,
[Item_Pac_Pallet] [varchar](max) NULL
)
WITH
(
DISTRIBUTION = ROUND_ROBIN,
HEAP
)
GO

Pipeline Run ID: 03afc9e9-c156-4d86-bdc7-14c0a4ac095c
Data Flow Run ID: c0150967-1b45-4707-9f19-370d2bf3d529

Could you please investigate and advise?

azure-data-factory · azure-synapse-analytics

@stakv-1497 Thanks for using Microsoft Q&A!
You could get this overflow error if the table has one or more records with very long text: an implicit cast converts that text to NVARCHAR, and the conversion generates the arithmetic overflow error. Can you please check whether any column contains very long text? Alternatively, I suggest opening an Azure support case for this issue, as someone has to look into your environment and data to investigate it further.
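To act on the suggestion above, a minimal sketch like the following can flag the offending fields, assuming the source rows can be loaded as Python dicts and using 4000 characters as the threshold (the NVARCHAR(n) column limit before NVARCHAR(MAX) is required); `find_long_fields` and the sample rows are hypothetical, not part of the pipeline:

```python
# Sketch: flag string fields that exceed the NVARCHAR(4000) column limit.
# Assumes rows are available as plain dicts (e.g. parsed from the source JSON).

NVARCHAR_LIMIT = 4000  # max n for NVARCHAR(n); longer text needs NVARCHAR(MAX)

def find_long_fields(rows, limit=NVARCHAR_LIMIT):
    """Return (row_index, field, length) for every string value over the limit."""
    offenders = []
    for i, row in enumerate(rows):
        for field, value in row.items():
            if isinstance(value, str) and len(value) > limit:
                offenders.append((i, field, len(value)))
    return offenders

rows = [
    {"Item_ItemNumber": "A-1", "Item_Txt": "x" * 5000},  # Item_Txt is too long
    {"Item_ItemNumber": "A-2", "Item_Txt": "short"},
]
print(find_long_fields(rows))  # -> [(0, 'Item_Txt', 5000)]
```

Running this over a sample of the staged data narrows the investigation to specific columns before involving support.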


Thank you for the quick response, SaurabhSharma-msft.

Indeed, there could be long text coming from the source files, as they are complex hierarchical JSONs.

We have opened an Azure support case for further investigation.

I am going to post an update in case we find anything helpful and worth sharing.
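Since the sources are complex hierarchical JSONs, a small sketch like this (a hypothetical helper, assuming the documents can be parsed into Python structures) can help locate which nested attribute carries the longest text:

```python
import json

def max_string_lengths(node, path="$", acc=None):
    """Walk a parsed JSON tree and record the longest string seen at each path."""
    if acc is None:
        acc = {}
    if isinstance(node, dict):
        for key, value in node.items():
            max_string_lengths(value, f"{path}.{key}", acc)
    elif isinstance(node, list):
        for item in node:
            max_string_lengths(item, f"{path}[]", acc)
    elif isinstance(node, str):
        acc[path] = max(acc.get(path, 0), len(node))
    return acc

doc = json.loads('{"Item": {"Txt": "abcde", "Pac": {"Package": "xy"}}}')
print(max_string_lengths(doc))  # -> {'$.Item.Txt': 5, '$.Item.Pac.Package': 2}
```

The JSONPath-style keys make it easy to map an over-long value back to the flattened column in the staging table.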


@stakv-1497 sure, thanks for the updates.


@stakv-1497 @SaurabhSharma-msft
Hi, I am facing a similar issue while trying to read some columns from a Parquet file through an external table in Synapse. I also found the column containing very long text, so I tried to reduce its length to 4000 in Databricks while creating the Parquet file. However, it still gives the error below:
"HdfsBridge::recordReaderFillBuffer - Unexpected error encountered filling record reader buffer: HadoopSqlException: Arithmetic overflow error converting expression to data type NVARCHAR"

Any suggestion would be much appreciated.
Thank you!
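One thing worth checking in this situation (a hedged sketch, not a confirmed diagnosis for this error): truncating to 4000 *characters* is not the same as 4000 *bytes*. SQL Server VARCHAR(n) limits count bytes, so non-ASCII text that fits in 4000 characters can still be too long once UTF-8 encoded. A byte-aware truncation, with `truncate_utf8` as a hypothetical helper, looks like this:

```python
def truncate_utf8(text, max_bytes=4000):
    """Truncate to at most max_bytes of UTF-8 without splitting a character."""
    encoded = text.encode("utf-8")
    if len(encoded) <= max_bytes:
        return text
    # Cut at the byte limit, then drop any partially-cut trailing character.
    return encoded[:max_bytes].decode("utf-8", errors="ignore")

s = "é" * 3000  # 3000 characters, but 6000 bytes in UTF-8 (2 bytes each)
t = truncate_utf8(s)
print(len(t), len(t.encode("utf-8")))  # -> 2000 4000
```

If the column data is pure ASCII this makes no difference, in which case the overflow is likely coming from another column or from the external table's declared types rather than the truncated one.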


0 Answers