I have a Synapse workspace running a number of pipelines every night. The pipelines start notebooks that run on a Spark pool. Since April 1st, all of the pipelines have failed with this error message:
"message": "Operation on target failed: Activity failed because an inner activity failed; Inner activity name: , Error: Exception: Failed to create Livy session for executing notebook. Error: Your Spark job requested 12 vcores. However, the workspace has a 0 core limit. Try reducing the numbers of vcores requested or increasing your vcore quota. HTTP status code: 400."
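For context, here is a minimal sketch of where the "12 vcores" figure likely comes from. The node size and node count are assumptions (a Small Synapse Spark node has 4 vCores, and a minimal session of 1 driver plus 2 executors uses 3 nodes); the actual pool configuration may differ.

```python
# Hypothetical breakdown of the vcore request in the error message.
# Assumptions (not confirmed by the error itself): Small node size (4 vCores
# per node) and a 3-node session (1 driver + 2 executors).
VCORES_PER_SMALL_NODE = 4
nodes_requested = 3

requested_vcores = nodes_requested * VCORES_PER_SMALL_NODE
print(requested_vcores)  # 12 -- any workspace quota below this, including 0, rejects the session
```

Whatever the exact pool settings, a workspace vCore quota of 0 will reject every session request, which matches all pipelines failing at once.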
These pipelines had been running fine since I scheduled them three weeks ago, but the workspace suddenly has a 0 core limit. Has anyone experienced anything similar? I can't start Data Flow debug clusters anymore either; they fail with the message: "Failed to setup debug session. Debug session is already terminated."
It seems like there is a problem between Synapse and Spark.