In ADF, is there a way to keep the Spark cluster used for data flows from shutting down at all (i.e., can we set the TTL to forever)? That way we could reuse a single cluster for multiple data flows, even if the next data flow runs 4 hours or more after the previous one. The UI only offers TTL values up to 4 hours; is there a way to set it higher than that?
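For context, the TTL we mean is the one on the Azure Integration Runtime's data flow settings. A rough sketch of the IR definition as we understand it (the property names and the assumption that `timeToLive` is in minutes are based on our reading of the ARM/JSON representation, so please correct us if this is off):

```json
{
  "name": "DataFlowIR",
  "properties": {
    "type": "Managed",
    "typeProperties": {
      "computeProperties": {
        "location": "AutoResolve",
        "dataFlowProperties": {
          "computeType": "General",
          "coreCount": 8,
          "timeToLive": 240
        }
      }
    }
  }
}
```

240 minutes (4 hours) is the largest value we have been able to set, whether through the portal UI or by editing the JSON directly.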
Any thoughts?