SatyaD-1257 asked:

Azure Data Factory extracting Activities from Dynamics 365

I have an ADF pipeline extracting the 'activity' entity (activitypointer) from Dynamics 365 Online. The pipeline runs for 60+ minutes and then fails with the error below on the copy activity. The sink is a Gen2 data lake, and the data factory uses the 'AutoResolveIntegrationRuntime'. I can extract other entities such as 'Accounts' from CRM without issues. I would appreciate help fixing this. I have cross-checked other posts about this type of ADF error, and this one seems different, so I am creating a new thread.

Team, I can't find the right tags to associate this question with the Azure Data Factory team, and I tried different combinations with no luck. Please add the necessary tags if you want the question tagged correctly.

"errorCode": "2200",
"message": "Failure happened on 'Sink' side. ErrorCode=ParquetJavaInvocationException,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=An error occurred when invoking java, message: java.lang.OutOfMemoryError: Direct buffer memory
total entry:19
java.nio.Bits.reserveMemory(
java.nio.DirectByteBuffer.<init>(
java.nio.ByteBuffer.allocateDirect(
org.apache.parquet.hadoop.codec.SnappyCompressor.setInput(
org.apache.parquet.hadoop.codec.NonBlockedCompressorStream.write(
org.apache.parquet.bytes.CapacityByteArrayOutputStream.writeToOutput(
org.apache.parquet.bytes.CapacityByteArrayOutputStream.writeTo(
org.apache.parquet.bytes.BytesInput$CapacityBAOSBytesInput.writeAllTo(
org.apache.parquet.bytes.BytesInput$SequenceBytesIn.writeAllTo(
org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.compress(
org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(
org.apache.parquet.column.impl.ColumnWriterV1.writePage(
org.apache.parquet.column.impl.ColumnWriterV1.flush(
org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(
org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(
org.apache.parquet.hadoop.InternalParquetRecordWriter.checkBlockSizeReached(
org.apache.parquet.hadoop.InternalParquetRecordWriter.write(
org.apache.parquet.hadoop.ParquetWriter.write(
.,Source=Microsoft.DataTransfer.Richfile.ParquetTransferPlugin,''Type=Microsoft.DataTransfer.Richfile.JniExt.JavaBridgeException,Message=,Source=Microsoft.DataTransfer.Richfile.HiveOrcBridge,'",
"failureType": "UserError",
"target": "Copy Activities",
"details": []



I was able to update the tag to 'Azure Data Factory'.


1 Answer

SatyaD-1257 answered:

After changing the sink-side compression from 'Snappy' to 'None', the Copy Activity ran successfully. I still don't understand the relationship between the compression setting and the out-of-memory error the copy activity was throwing. I hope this helps someone else hitting a similar issue.
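For reference, the fix amounts to setting the Parquet sink dataset's compression codec to 'none' (Snappy is the default when the property is omitted). A minimal sketch of what the sink dataset JSON might look like; the dataset name, linked service name, file system, and folder path below are placeholders, not values from the original pipeline:

```json
{
  "name": "SinkParquetDataset",
  "properties": {
    "linkedServiceName": {
      "referenceName": "AzureDataLakeStorageGen2LS",
      "type": "LinkedServiceReference"
    },
    "type": "Parquet",
    "typeProperties": {
      "location": {
        "type": "AzureBlobFSLocation",
        "fileSystem": "raw",
        "folderPath": "crm/activitypointer"
      },
      "compressionCodec": "none"
    }
  }
}
```

In the ADF authoring UI, the same change is made by setting the Compression type of the Parquet dataset (or the sink's format settings) from Snappy to None.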

