Use one-click ingestion to ingest JSON data from a local file to an existing table in Azure Data Explorer

One-click ingestion enables you to quickly ingest data in JSON, CSV, and other formats into a table and easily create mapping structures. The data can be ingested from storage, from a local file, or from a container, as a one-time or continuous ingestion process.

This document describes using the intuitive one-click wizard in a specific use case to ingest JSON data from a local file into an existing table. Use the same process with slight adaptations to cover a variety of different use cases.

For an overview of one-click ingestion and a list of prerequisites, see One-click ingestion. For different types or sources of data, see Use one-click ingestion to ingest CSV data from a container to a new table in Azure Data Explorer.

Note

To enable access between a cluster and a storage account without public access (restricted to private endpoint/service endpoint), see Create a Managed Private Endpoint.

Ingest new data

  1. In the left menu of the Web UI, select Data.

  2. From the Quick actions section, select Ingest new data. Alternatively, from the All actions section, select Ingest new data and then Ingest.

    Screenshot of the Web UI where you select one-click ingestion for a table.

Select an ingestion type

  1. In the Ingest new data window, the Destination tab is selected.

  2. The Cluster and Database fields are auto-populated. You can change the selection by choosing an existing cluster and database name from the drop-down menus.

    1. To add a new connection to a cluster, select Add cluster connection below the auto-populated cluster name.

      Ingest new data tab - add a new cluster connection.

    2. In the popup window, enter the Connection URI for the cluster you're connecting to (an example URI format is shown after these steps).

    3. Enter a Display Name that you want to use to identify this cluster, and select Add.

      Add cluster URI and description to add a new cluster connection in Azure Data Explorer.

  3. If the Table field isn't automatically filled, select an existing table name from the drop-down menu.

  4. Select Next: Source
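
For reference, a cluster connection URI generally takes the following form; the cluster and region names shown here are hypothetical:

```
https://mycluster.westeurope.kusto.windows.net
```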

Source tab

  1. Under Source type, do the following steps:

    1. Select from file

    2. Select Browse to locate up to 10 files, or drag the files into the field. The schema-defining file can be chosen using the blue star. (A hypothetical sample file is shown after these steps.)

    3. Select Next: Schema

      One-click ingestion from file.
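
For illustration only, the local file in this flow could contain one JSON record per line, similar to the following hypothetical sample. The field names are examples, not requirements; your own data defines the schema:

```json
{"timestamp": "2024-05-01T10:00:00Z", "device": "sensor-01", "properties": {"temperature": 21.5, "status": "ok"}}
{"timestamp": "2024-05-01T10:01:00Z", "device": "sensor-02", "properties": {"temperature": 22.1, "status": "warn", "error": {"code": "E42"}}}
```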

Edit the schema

The Schema tab opens.

  • The data format is selected automatically based on the source file name. In this case, the format is JSON.

  • When you select JSON, you must also select Nested levels, from 1 to 100. The nested level determines how deeply the JSON is flattened into table columns (a mapping sketch follows this list).

    Select Nested levels.

  • For tabular formats, you can select Keep current table schema. Tabular data doesn't necessarily include the column names that are used to map source data to the existing columns. When this option is checked, mapping is done by column order, and the table schema remains the same. If this option is unchecked, new columns are created for incoming data, regardless of data structure.

    Screenshot showing the 'Keep current table schema' option checked when using a tabular data format.
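
To illustrate how the nested level affects the resulting columns, the following sketch shows the kind of JSON ingestion mapping the wizard can generate for the hypothetical sample file above. The table name, column names, and JSON paths are assumptions for illustration, not literal wizard output:

```kusto
// Sketch only: with a nested level of 2, paths such as $.properties.temperature
// become their own columns. With a nested level of 1, only top-level fields like
// $.timestamp and $.device would be mapped to columns.
.create-or-alter table SensorReadings ingestion json mapping "SensorReadings_mapping"
'[{"column":"timestamp","path":"$.timestamp","datatype":"datetime"},{"column":"device","path":"$.device","datatype":"string"},{"column":"temperature","path":"$.properties.temperature","datatype":"real"},{"column":"status","path":"$.properties.status","datatype":"string"}]'
```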

Add nested JSON data

To add columns from JSON levels that are different from the main Nested levels setting selected above, do the following steps (a command sketch follows the procedure):

  1. Select the arrow next to any column name, and then select New column.

    Screenshot of options to add a new column - schema tab during one click ingestion process - Azure Data Explorer.

  2. Enter a new Column Name and select the Column Type from the dropdown menu.

  3. Under Source, select Create new.

    Screenshot - create new source for adding nested JSON data in one click ingestion process - Azure Data Explorer.

  4. Enter the new source for this column and select OK. This source can come from any JSON level.

    Screenshot showing window to name the new data source for the added column.

  5. Select Create. Your new column will be added at the end of the table.

    Screenshot - create a new column during one click ingestion in Azure Data Explorer.
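
Conceptually, adding such a column extends both the table schema and the ingestion mapping. The following is an assumption of what roughly equivalent commands could look like for a hypothetical errorCode column sourced from a deeper JSON level; it isn't the literal output of the wizard, and each command runs as its own statement:

```kusto
// Sketch only: add a hypothetical errorCode column to the existing table.
.alter-merge table SensorReadings (errorCode: string)

// Sketch only: map the new column from a JSON path that sits deeper than the
// main Nested levels setting.
.create-or-alter table SensorReadings ingestion json mapping "SensorReadings_mapping"
'[{"column":"timestamp","path":"$.timestamp","datatype":"datetime"},{"column":"device","path":"$.device","datatype":"string"},{"column":"errorCode","path":"$.properties.error.code","datatype":"string"}]'
```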

Edit the table

The changes you can make in a table depend on the following parameters:

  • Table type is new or existing
  • Mapping type is new or existing
| Table type | Mapping type | Available adjustments |
|---|---|---|
| New table | New mapping | Change data type, Rename column, New column, Delete column, Update column, Sort ascending, Sort descending |
| Existing table | New mapping | New column (on which you can then change data type, rename, and update), Update column, Sort ascending, Sort descending |
| Existing table | Existing mapping | Sort ascending, Sort descending |

Note

When adding a new column or updating a column, you can change mapping transformations. For more information, see Mapping transformations.

Note

  • For tabular formats, you can't map a column twice. To map to an existing column, first delete the new column.
  • You can't change an existing column type. If you try to map to a column having a different format, you may end up with empty columns.
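
As an illustration of a mapping transformation, an ingestion mapping entry can apply a transform such as DateTimeFromUnixSeconds when the source field holds a Unix-epoch value. The table, column, and path below are hypothetical:

```kusto
// Sketch only: the DateTimeFromUnixSeconds transform converts a hypothetical
// Unix-epoch field into a datetime column during ingestion.
.create-or-alter table SensorReadings ingestion json mapping "SensorReadings_epoch_mapping"
'[{"column":"ingestedAt","path":"$.epochSeconds","datatype":"datetime","transform":"DateTimeFromUnixSeconds"}]'
```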

Command editor

Above the Editor pane, select the v button to open the editor. In the editor, you can view and copy the automatic commands generated from your inputs.

One click ingestion edit view.
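
If you want to review the mapping again later, you can list a table's JSON ingestion mappings with a management command; the table name below is hypothetical:

```kusto
// List the JSON ingestion mappings currently defined on the table.
.show table SensorReadings ingestion json mappings
```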

Start ingestion

Select Next: Summary to begin data ingestion.

Start ingestion.

Complete data ingestion

In the Data ingestion completed window, all three steps will be marked with green check marks when data ingestion finishes successfully.

One click ingestion completed.
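
To verify the result, you can run a quick query against the table; the table name below is hypothetical:

```kusto
// Confirm that the new records arrived, then inspect a few of them.
SensorReadings
| count

SensorReadings
| take 10
```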

Important

To set up continuous ingestion from a container, see Use one-click ingestion to ingest CSV data from a container to a new table in Azure Data Explorer.

Explore quick queries and tools

In the tiles below the ingestion progress, explore Quick queries or Tools:

  • Quick queries include links to the Web UI with example queries.

  • Tools includes links to Undo or Delete new data on the Web UI, which enable you to troubleshoot issues by running the relevant .drop commands (a hypothetical example appears after the note below).

    Note

    You might lose data when you use .drop commands. Use them carefully. Drop commands will only revert the changes that were made by this ingestion flow (new extents and columns). Nothing else will be dropped.
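
For context, a heavily simplified, hypothetical example of this kind of extent-level drop command is shown below. Prefer the relevant commands that the Undo or Delete new data options provide for your specific ingestion, and review what will be dropped before running anything:

```kusto
// Sketch only, with a hypothetical time filter: drop the extents that a recent
// ingestion created in the table. Run the inner .show command on its own first;
// dropped data can't be recovered.
.drop extents <| .show table SensorReadings extents | where MaxCreatedOn > ago(1h)
```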

Next steps