Is there a way to create a Spark DataFrame in a Scala cell and then access it from Python, without explicitly writing it to disk and re-reading it?
In Databricks I can run dfFoo.createOrReplaceTempView("temp_df_foo") in a Scala cell, then in a Python cell read it back with spark.read.table('temp_df_foo'), and Databricks does all the work in the background.
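For concreteness, this is the pattern I mean, as two cells in one Databricks notebook (dfFoo and temp_df_foo are just example names; %scala and %python are the cell-language magics):

```
%scala
// Scala cell: register an existing DataFrame as a temp view
// (dfFoo stands in for whatever DataFrame was built earlier)
dfFoo.createOrReplaceTempView("temp_df_foo")
```

```
%python
# Python cell in the same notebook: the temp view is visible
# because both languages share the same SparkSession and catalog
df_foo = spark.read.table("temp_df_foo")
df_foo.show()
```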
Is something similar possible in Synapse?