I am copying data from an API into a Parquet file and then loading it into the data warehouse. One of the columns in a table (I cannot figure out which one) is causing this error: "Unexpected error encountered filling record reader buffer: HadoopSqlException: String or binary data would be truncated". Just for testing I have set every destination column to varchar(max), yet the load still fails.

I would happily truncate the data to fit the maximum allowed column width, if I could figure out how to do that on copy from the API, and if I could find which column is actually causing the problem. How can I make this work? What are my options here?

Thank you, Hanna
