Migrate data between Microsoft Dataverse environments using the dataflows OData connector

Note

Consider using the Power Query Dataverse connector with dataflows rather than the OData connector. Dataflows are the recommended method to migrate data between Dataverse environments. More information: What are dataflows?

The Dataverse Web API works with any technology that supports OData and OAuth, so there are many options for moving data in and out of Dataverse. The OData connector is one of the connectors available in dataflows, and it's designed to support migration and synchronization of large datasets in Dataverse.

In this article, we walk you through how to migrate data between Dataverse environments using the dataflows OData connector.

Prerequisites

  • System Administrator or System Customizer security role permission on both the source and target environments.

  • Power Apps, Power Automate, or Dataverse license (per app or per user).

  • Two Dataverse environments with a database.

Scenarios

  • A one-time cross-environment or cross-tenant migration is needed (for example, geo-migration).

  • A developer needs to update an app that's used in production, and needs test data in their development environment to build out changes easily.

Step 1: Plan out the dataflow

  1. Identify the source and target environments.

    • The source environment is where the data is migrated from.

    • The target environment is where the data is migrated to.

  2. Ensure that the tables are already defined in the target environment. Ideally both environments should have the same tables defined with the same solution.

  3. Importing relationships requires multiple dataflows.

    Tables in a one (parent/independent) to many (child/dependent) relationship require separate dataflows. Configure the parent dataflow to run before any child dataflows, because the parent data must be loaded first to map correctly to the lookup columns in the corresponding child tables. Additionally, you must create an alternate key on the parent table before you can set a lookup column on the child table; without a key defined on the parent table, you can't populate lookup columns on any child tables.
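
    To make the run order concrete, suppose a hypothetical parent accounts table with an alternate key on accountnumber and a child contacts table. Once the accounts dataflow has completed, the child query can expose the parent's key value so the lookup column can be mapped to it later (in step 5). The following is a minimal Power Query M sketch, assuming those placeholder table, column, and navigation property names; use the names from your own schema.

        let
            // Placeholder service root URL; replace "yourorg" with your organization.
            Source = OData.Feed(
                "https://yourorg.crm.dynamics.com/api/data/v9.2",
                null,
                [Implementation = "2.0"]
            ),

            // Child table: contacts (hypothetical example).
            Contacts = Source{[Name = "contacts", Signature = "table"]}[Data],

            // Expand the parent lookup so the output includes the parent's
            // alternate key value (accountnumber), which the lookup column
            // on the target child table can be mapped to.
            WithParentKey = Table.ExpandRecordColumn(
                Contacts,
                "parentcustomerid_account",
                {"accountnumber"},
                {"parent_accountnumber"}
            )
        in
            WithParentKey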

Step 2: Get the OData endpoint

Dataverse provides an OData endpoint that the dataflows OData connector can authenticate with directly, so no additional configuration is needed to connect to the source environment.

This article walks through how to set up a new dataflow with the OData connector. For information on connecting to all data sources supported by dataflows, see Create and use dataflows.

From the source environment, get the OData endpoint for that environment:

  1. Sign in to Power Apps.

  2. Select the required source environment from the upper-right corner.

  3. Select the Settings (gear) icon in the upper-right corner, and then select Advanced Settings.

  4. On the Settings page, select the drop-down arrow next to Settings, and select Customizations.

  5. On the Customizations page, select Developer Resources.

  6. Copy the Service Root URL to Notepad.

    Copy the service root URL in the developer resources.
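
    If you want to sanity-check the endpoint before building the dataflow, a minimal Power Query M sketch like the following lists the entity sets (tables) the service root URL exposes. The organization URL is a placeholder; substitute the value you copied from Developer Resources.

        let
            // Service root URL copied from Developer Resources; "yourorg" and
            // the API version are placeholders for your environment's values.
            ServiceRootUrl = "https://yourorg.crm.dynamics.com/api/data/v9.2",

            // OData.Feed signs in with the organizational account and returns
            // the entity sets the endpoint exposes.
            Source = OData.Feed(ServiceRootUrl, null, [Implementation = "2.0"])
        in
            Source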

Step 3: Create a new OData dataflow

In the target environment, create a new dataflow with the OData connector.

  1. Sign in to Power Apps.

  2. Select the required target environment from the upper-right corner.

  3. In the left navigation pane, expand the Data menu, and then select Dataflows.

  4. Select New dataflow to create a new dataflow. Provide a meaningful name for the dataflow. Select Create.

    Prompt for a new dataflow.

  5. Select the OData connector.

    Select OData source.

  6. In the Connection settings dialog box, enter the following values:

    Confirm the column values are correct.

    • URL: Provide the Service Root URL you copied from the source environment.
    • Connection: Select Create new connection. This is chosen automatically if you haven't made an OData connection in dataflows before.
    • Connection name: A name is populated automatically; optionally, rename it.
    • On-premises data gateway: None. An on-premises data gateway isn't needed for connections to this cloud service.
    • Authentication kind: Organizational account. Select Sign in to open the sign-in dialog that authenticates the account associated with the connection.

    Important

    Disable pop-up and cookie blockers in your browser to configure Microsoft Entra ID authentication. This applies whether you're connecting to the Dataverse OData endpoint or any other OAuth-based data source.

  7. Select Next in the lower right.

Step 4: Select and transform data with Power Query

Use Power Query to select the tables to migrate and to transform the data as needed.

First, select the tables that need to be transferred. You can browse all tables in the source environment and preview some of the data in each table.

Power query navigator.

  1. Select one or multiple tables as needed, and then select Transform data.

    Note

    When importing relationships, remember that the parent table's dataflow needs to run before the child tables' dataflows. The child dataflow requires the data to already exist in the parent table to map lookups correctly; otherwise the refresh might fail with an error.

  2. In the Power Query - Edit queries window, you can transform the query before import.

    • If you're only migrating data, you shouldn't need to modify anything here.

    • Removing unnecessary columns improves dataflow performance for larger datasets (see the sketch after these steps).

    Tip

    You can go back and choose more tables by using the Get data ribbon option with the same OData connection.

  3. Select Next in the lower right.
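
For reference, the query behind a trimmed table might look similar to this Power Query M sketch. The table and column names here (accounts, accountid, name, telephone1) are illustrative placeholders, not generated output:

    let
        // Connect to the source environment's service root URL (placeholder).
        Source = OData.Feed(
            "https://yourorg.crm.dynamics.com/api/data/v9.2",
            null,
            [Implementation = "2.0"]
        ),

        // Navigate to the accounts entity set (example table).
        Accounts = Source{[Name = "accounts", Signature = "table"]}[Data],

        // Keep only the columns the target table needs; fewer columns
        // means less data to move and a faster refresh.
        Trimmed = Table.SelectColumns(Accounts, {"accountid", "name", "telephone1"})
    in
        Trimmed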

Step 5: Configure target environment settings

This section describes how to define the target environment settings.

Step 5.1: Map tables

For each table chosen, select the behavior for importing that table in these settings and then select Next.

Map tables.

  • Load to existing table (recommended)

    • Use this option when the same table schema is already defined in the target environment; the dataflow syncs data from the source environment's table into that existing table.
    • Ideally, use the same solution in both the source and target environments to make the data transfer seamless. Another advantage of a predefined table is more control over which solution the table is defined in and over its prefix.
    • Choose Delete rows that no longer exist in the query output. This ensures that relationships map correctly because it maintains the values for the lookups. To use this feature, you must first define an alternate key to reference rows on the target/existing table so the dataflow can determine whether to update existing rows or create new ones.

      Note

      This option should only be used if the goal is to make the data in the source and target identical. If another process in the destination environment adds data to the same table (or if the table contains other existing data), that data is deleted by this dataflow.

    • If the schema is identical in both source and target tables, you can select Auto map to quickly map the columns.
    • Requires a key configuration in the target environment (as the unique identifier columns are not available to modify).

    Important

    The 'delete rows' option is only available when a key is specified. A table can exist without a key, but a key is required to update or delete rows because it's the unique identifier the system uses to perform those tasks. If your table doesn't have a key and you want to use the delete or update functionality provided by dataflows, add a key directly to the Dataverse table first.

    More information: Define alternate keys using Power Apps portal

  • Load to new table (not recommended)

    • Ideally there should be a table predefined in the target environment from the same solution import as the source environment. However, there are cases where this might not be feasible, so this is an option if there is no existing table to load to.
    • It creates a new custom table in the target environment's default solution.
  • There's also a Do not load option, but don't include tables in the dataflow that aren't being loaded. Instead, select Back from this menu to return to the Power Query screen and remove the tables that aren't needed.

Step 5.2: Refresh settings

Select Refresh manually, because this is a one-time migration, and then select Create.

Step 6: Run the dataflow

The initial dataflow load initiates when you select Create.

Initial dataflow load.

You can manually initiate a dataflow by selecting (...) in the dataflows list. Make sure to run dependent (child) dataflows only after the parent dataflows have completed.

Refresh manually.

Tips

  • Try out one table first to walk through the steps, then build out all the dataflows.

  • If multiple tables contain large amounts of data, consider configuring a separate dataflow for each table.

  • One-to-many relationships require separate dataflows for each table. Configure and run the parent (the "one", or independent) table's dataflow before the child table's dataflow.

  • If there are errors with the dataflow refresh, you can view the refresh history in the (...) menu in the dataflows list and download each refresh log.

Limitations