0 Votes
DimitriB-1079 asked · PRADEEPCHEEKATLA-MSFT edited

Accessing dataframe created in Scala from Python command

Is there a way to create a Spark dataframe in Scala command, and then access it in Python, without explicitly writing it to disk and re-reading?

In Databricks I can call dfFoo.createOrReplaceTempView("temp_df_foo") in a Scala cell, and then spark.read.table('temp_df_foo') in a Python cell, and Databricks does all the work in the background.

Is something similar possible in Synapse?

azure-synapse-analytics · azure-databricks

1 Vote
PRADEEPCHEEKATLA-MSFT answered

@DimitriB-1079 Welcome to the Microsoft Q&A platform.


You can create an Apache Spark pool in Azure Synapse Analytics and run the same queries there that you are running in Azure Databricks.


[Screenshot: spark.read.table running in a Synapse notebook (9511-synapse-sparkreadtable.jpg)]


Reference: Quickstart: Create an Apache Spark pool (preview) in Azure Synapse Analytics using web tools.


Hope this helps. Do let us know if you have any further queries.




Do click on "Accept Answer" and upvote the post that helps you; this can be beneficial to other community members.




1 Vote
euangMS answered

The exact same code should work in Synapse Spark.
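For example, in a Synapse notebook you can use the %%spark and %%pyspark cell magics to mix languages within one Spark session; a temp view registered in a Scala cell is visible to a later Python cell. A minimal sketch (the sample DataFrame here is made up for illustration):

```scala
%%spark
// Scala cell: build a DataFrame and register it as a temp view
val dfFoo = Seq((1, "a"), (2, "b")).toDF("id", "value")
dfFoo.createOrReplaceTempView("temp_df_foo")
```

```python
%%pyspark
# Python cell in the same notebook/session: read the view the Scala cell registered
df_foo = spark.read.table("temp_df_foo")
df_foo.show()
```

Note that temp views are scoped to the Spark session, so both cells must run in the same notebook session; nothing is written to disk.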
