Currently, each of our Spark job runs creates a folder with a random GUID name in the root of the container we use as our HDInsight cluster's storage. This folder appears to be the working directory for the job: it contains a copy of the submitted script. Is there a way to specify a parent folder under which these per-job GUID folders are created, rather than having them land in the container root?
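For reference, the container root ends up looking roughly like the listing sketched below (the folder names here are hypothetical, invented for illustration); a plain regex on the 8-4-4-4-12 hex GUID format is enough to tell the job-state folders apart from our own data folders:

```python
import re

# Hypothetical top-level entries from our storage container root;
# the GUID-named ones appear per job run, the others are our own folders.
entries = [
    "0b5c8e2f-1d34-4a6b-9c7e-2f8a1b3c4d5e",
    "7f1a2b3c-9d8e-4f5a-b6c7-d8e9f0a1b2c3",
    "data",
    "scripts",
]

# Matches a GUID in the standard 8-4-4-4-12 lowercase-hex format.
GUID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$"
)

job_folders = [e for e in entries if GUID_RE.match(e)]
print(job_folders)
```

As the number of runs grows, these folders drown out the real content of the container, which is why we would like to confine them to a dedicated subfolder.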