TIBCO Spotfire Analyst

This article describes how to use TIBCO Spotfire Analyst with an Azure Databricks cluster or an Azure Databricks SQL endpoint.

Step 1: Get Azure Databricks connection information

Cluster

  1. Get a personal access token.
  2. Get the Server Hostname, Port, and HTTP Path field values from the JDBC/ODBC tab for your cluster.

SQL endpoint

  1. Get a personal access token.
  2. Get the Server Hostname, Port, and HTTP Path field values from the Connection Details tab for your SQL endpoint.
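
The values gathered in this step map directly onto a Spark SQL ODBC-style connection string, which can be useful for checking them outside TIBCO Spotfire. The following Python sketch shows that mapping; the host name, HTTP path, and token below are hypothetical placeholders, not real endpoints or credentials:

```python
# Sketch: assemble hypothetical connection values into an ODBC-style
# connection string for a Spark SQL (Azure Databricks) data source.
server_hostname = "adb-1234567890123456.7.azuredatabricks.net"  # hypothetical
port = 443
http_path = "sql/protocolv1/o/1234567890123456/0123-456789-abcdefgh"  # hypothetical
token = "dapi-EXAMPLE-TOKEN"  # personal access token placeholder

connection_string = ";".join([
    f"Host={server_hostname}",
    f"Port={port}",
    f"HTTPPath={http_path}",
    "ThriftTransport=2",   # HTTP transport, matching the Spotfire setting in Step 2
    "SSL=1",
    "AuthMech=3",          # username/password authentication
    "UID=token",           # the literal word 'token' as the user name
    f"PWD={token}",
])
print(connection_string)
```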

Step 2: Configure Azure Databricks cluster connection in TIBCO Spotfire

  1. In TIBCO Spotfire Analyst, on the navigation bar, click the plus (Files and data) icon and click Connect to.
  2. Select Databricks and click New connection.
  3. In the Apache Spark SQL dialog, on the General tab, for Server, enter the Server Hostname and Port field values from Step 1, separated by a colon.
  4. For Authentication method, select Username and password.
  5. For Username, enter the word token.
  6. For Password, enter your personal access token from Step 1.
  7. On the Advanced tab, for Thrift transport mode, select HTTP.
  8. For HTTP Path, enter the HTTP Path field value from Step 1.
  9. On the General tab, click Connect.
  10. After a successful connection, in the Database list, select the database you want to use, and then click OK.
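
The Server field in step 3 is simply the Server Hostname and Port values joined by a colon. A minimal sketch, using a hypothetical host name:

```python
# Sketch: the Server field in the Apache Spark SQL dialog combines the
# Server Hostname and Port values from Step 1, separated by a colon.
server_hostname = "adb-1234567890123456.7.azuredatabricks.net"  # hypothetical
port = 443

server_field = f"{server_hostname}:{port}"
print(server_field)  # value to paste into the Server field
```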

Step 3: Select the Azure Databricks data to analyze

You select data in the Views in Connection dialog.

Available Tables

  1. Browse the available tables in Azure Databricks.
  2. Add the tables you want as views, which will be the data tables you analyze in TIBCO Spotfire.
  3. For each view, you can decide which columns to include. If you want to create a more specific and flexible data selection, this dialog gives you access to a range of powerful tools, such as:
    • Custom queries. With custom queries, you can select the data you want to analyze by typing a custom SQL query.
    • Prompting. Leave the data selection to the user of your analysis file. You configure prompts based on columns of your choice, and the end user who opens the analysis can then limit the view to relevant values only. For example, the user can select data within a certain time span or for a specific geographic region.
  4. Click OK.
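
A custom query in this dialog is plain Spark SQL. The sketch below builds such a query string in Python; the `orders` table and its columns are hypothetical examples, not objects the article defines:

```python
# Sketch: a custom SQL query of the kind you might type in the
# Views in connection dialog. Table and column names are hypothetical.
custom_query = """
SELECT o.order_id,
       o.order_date,
       o.region,
       o.amount
FROM orders AS o
WHERE o.order_date >= '2021-01-01'
""".strip()
print(custom_query)
```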

Step 4: Push down queries to Azure Databricks or import data

When you have selected the data that you want to analyze, the final step is to choose how you want to retrieve the data from Azure Databricks. A summary of the data tables you are adding to your analysis is displayed, and you can click each table to change the data loading method.

orders table example

The default option for Azure Databricks is External. This means the data table is kept in-database in Azure Databricks, and TIBCO Spotfire pushes different queries to the database for the relevant slices of data, based on your actions in the analysis.

You can also select Imported, and TIBCO Spotfire will extract the entire data table up front, enabling local in-memory analysis. When you import data tables, you can also use the analytical functions in TIBCO Spotfire's embedded in-memory data engine.

The third option is On-demand (corresponding to a dynamic WHERE clause), which means that slices of data will be extracted based on user actions in the analysis. You can define the criteria, which could be actions like marking or filtering data, or changing document properties. On-demand data loading can also be combined with External data tables.
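
Conceptually, on-demand loading appends a WHERE clause that is regenerated from the user's current selection. The following Python sketch illustrates that substitution under stated assumptions: the `orders` table, its `region` column, and the selected values are all hypothetical, and TIBCO Spotfire generates the real clause internally from marking, filtering, or document-property changes.

```python
# Sketch: on-demand loading corresponds to a dynamic WHERE clause that
# Spotfire derives from user actions. Table, column, and region values
# here are hypothetical placeholders.
base_query = "SELECT order_id, region, amount FROM orders"

def on_demand_query(selected_regions):
    """Append a WHERE clause limiting the load to the user's selection."""
    if not selected_regions:
        return base_query  # no selection: fall back to the unrestricted query
    placeholders = ", ".join(f"'{r}'" for r in selected_regions)
    return f"{base_query} WHERE region IN ({placeholders})"

print(on_demand_query(["EMEA", "APAC"]))
```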