Plan your Azure Time Series Insights Preview environment

This article describes best practices to plan and get started quickly by using Azure Time Series Insights Preview.

Note

For best practices to plan a general availability Time Series Insights instance, read Plan your Azure Time Series Insights general availability environment.

Best practices for planning and preparation

Best practices for planning and preparing your environment are described in the following articles:

Azure Time Series Insights employs a pay-as-you-go business model. For more information about charges and capacity, read Time Series Insights pricing.

The preview environment

When you provision a Time Series Insights Preview environment, you create two Azure resources:

  • An Azure Time Series Insights Preview environment
  • An Azure Storage general-purpose V1 account

As part of the provisioning process, you specify whether you want to enable a warm store. A warm store provides a tiered query experience. When you enable it, you must specify a retention period between 7 and 30 days. Queries executed within the warm store retention period generally provide faster response times. When a query spans the warm store retention period, it's served from cold store.

Queries on warm store are free, while queries on cold store incur costs. It's important to understand your query patterns and plan your warm store configuration accordingly. We recommend that interactive analytics on the most recent data reside in your warm store and pattern analysis and long-term trends reside in cold.
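As a rough illustration of the tiered behavior described above (this is not the Time Series Insights API, just a sketch of the routing rule), a query whose window falls entirely inside the warm store retention period is served from warm store; otherwise it falls through to cold store:

```python
from datetime import datetime, timedelta, timezone

# Sketch only: decide which store serves a query, given the warm store
# retention period configured for the environment (7-30 days).
def store_for_query(query_start: datetime, retention_days: int = 7) -> str:
    """Return 'warm' if the whole query window falls inside the warm
    store retention period, else 'cold'."""
    warm_boundary = datetime.now(timezone.utc) - timedelta(days=retention_days)
    return "warm" if query_start >= warm_boundary else "cold"

now = datetime.now(timezone.utc)
print(store_for_query(now - timedelta(days=2)))   # recent data -> warm
print(store_for_query(now - timedelta(days=90)))  # historical data -> cold
```

Because warm queries are free and cold queries incur costs, the retention period you pick effectively draws this boundary for your query workload.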

Note

To read more about how to query your warm data, read the API Reference.

To start, you need three additional items:

Review preview limits

General availability and preview comparison

The following table summarizes several key differences between Azure Time Series Insights general availability (GA) and preview instances.

Attribute                            GA                           Preview
First-class citizen                  Event-centric                Time-series-centric
Semantic reasoning                   Low-level (reference data)   High-level (models)
Data contextualization               Non-device level             Device and non-device level
Compute logic storage                No                           Stored in type variables as part of the model
Storage and access control           No                           Enabled via the model
Aggregations/Sampling                No                           Event weighted and time weighted
Signal reconstruction                No                           Interpolation
Production of derived time series    No                           Yes, merges and joins
Language flexibility                 Non-composable               Composable
Expression language                  Predicate string             Time series expressions (predicate strings, values, expressions, and functions)

Property limits

Time Series Insights property limits have increased to 1,000 from a maximum cap of 800 in GA. Supplied event properties have corresponding JSON, CSV, and chart columns that you can view within the Time Series Insights Preview explorer.

SKU             Maximum properties
Preview PAYG    1,000 properties (columns)
GA S1           600 properties (columns)
GA S2           800 properties (columns)
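A hypothetical pre-flight check like the one below (not part of any Time Series Insights SDK) can catch events whose flattened property count would exceed the per-SKU caps in the table above:

```python
# Hypothetical helper: validate a flattened event against the per-SKU
# property (column) limits listed in the table above.
SKU_PROPERTY_LIMITS = {
    "Preview PAYG": 1000,
    "GA S1": 600,
    "GA S2": 800,
}

def within_property_limit(event: dict, sku: str) -> bool:
    """True if the event's top-level property count fits the SKU cap."""
    return len(event) <= SKU_PROPERTY_LIMITS[sku]

sensor_event = {"deviceId": "dev-001",
                "timestamp": "2020-01-01T00:00:00Z",
                "temperature": 22.5}
print(within_property_limit(sensor_event, "Preview PAYG"))  # True
```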

Event sources

A maximum of two event sources per instance is supported.

By default, Preview environments support ingress rates up to 1 megabyte per second (MB/s) per environment. Customers may scale their Preview environments up to 16 MB/s throughput if necessary. There is also a per-partition limit of 0.5 MB/s.
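The environment and per-partition limits above imply a minimum partition count for a target ingress rate. The arithmetic below is an illustrative back-of-the-envelope sizing, not an official sizing tool:

```python
import math

# Illustrative sizing, assuming the limits stated above: up to 1 MB/s
# per environment by default (scalable to 16 MB/s), and 0.5 MB/s per
# partition.
PER_PARTITION_MBPS = 0.5

def min_partitions(target_mbps: float) -> int:
    """Minimum number of event source partitions needed to sustain
    the target ingress rate without hitting the per-partition cap."""
    return math.ceil(target_mbps / PER_PARTITION_MBPS)

print(min_partitions(1.0))   # default environment rate -> 2 partitions
print(min_partitions(16.0))  # maximum scaled rate -> 32 partitions
```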

API limits

REST API limits for Time Series Insights Preview are specified in the REST API reference documentation.

Configure Time Series IDs and Timestamp properties

When you create a new Time Series Insights environment, you select a Time Series ID, which acts as a logical partition for your data. As noted, make sure you have your Time Series IDs ready.

Important

Time Series IDs can't be changed later. Verify each one before final selection and first use.

You can select up to three keys to uniquely differentiate your resources. For more information, read Best practices for choosing a Time Series ID and Storage and ingress.
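To make the "up to three keys" idea concrete, the sketch below shows a composite Time Series ID extracted from a flat event payload. The property names (plantId, lineId, sensorId) are hypothetical examples, not required names:

```python
# Illustrative only: a composite Time Series ID built from up to three
# event properties. The key names below are hypothetical.
TIME_SERIES_ID_KEYS = ("plantId", "lineId", "sensorId")

def time_series_id(event: dict) -> tuple:
    """Extract the composite Time Series ID from a flat event payload."""
    return tuple(event.get(k) for k in TIME_SERIES_ID_KEYS)

event = {"plantId": "plant-01", "lineId": "line-3", "sensorId": "temp-7",
         "temperature": 71.2}
print(time_series_id(event))  # ('plant-01', 'line-3', 'temp-7')
```

Every event that carries the same three key values belongs to the same logical time series, which is why the ID choice can't be changed after the fact.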

The Timestamp property is also important. You designate this property when you add an event source. Each event source has an optional Timestamp property that's used to track events over time. Timestamp values are case sensitive and must be formatted to the specification of each individual event source.

Tip

Verify the formatting and parsing requirements for your event sources.

When left blank, the Event Enqueue Time of an event source is used as the event Timestamp. If you send historical data or batched events, customizing the Timestamp property is more helpful than the default Event Enqueue Time. For more information, read about how to add event sources in Azure IoT Hub.
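The fallback behavior described above can be sketched as follows. This is not the ingestion engine's code, just an illustration, and the property name "timestamp" is a hypothetical example:

```python
from datetime import datetime, timezone

# Sketch of the Timestamp fallback: when the configured Timestamp
# property is missing from an event, fall back to the event source's
# Event Enqueue Time. The property name is illustrative.
def event_timestamp(event: dict, enqueue_time: datetime,
                    ts_property: str = "timestamp") -> datetime:
    raw = event.get(ts_property)
    if raw is None:
        return enqueue_time
    # ISO 8601 with a trailing 'Z'; fromisoformat needs an explicit offset.
    return datetime.fromisoformat(raw.replace("Z", "+00:00"))

enqueued = datetime(2020, 1, 1, 12, 0, tzinfo=timezone.utc)
print(event_timestamp({"timestamp": "2020-01-01T08:30:00Z"}, enqueued))
print(event_timestamp({"temperature": 21.0}, enqueued))  # falls back
```

For batched or historical data, the two values can differ by hours or days, which is why a customized Timestamp property usually gives more meaningful results than the enqueue time.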

Understand the Time Series Model

You can now configure your Time Series Insights environment's Time Series Model. The new model makes it easy to find and analyze IoT data. It enables the curation, maintenance, and enrichment of time series data and helps to prepare consumer-ready data sets. The model uses Time Series IDs, each of which maps to an instance that associates the unique resource with variables (known as types) and hierarchies. Read about the new Time Series Model.

The model is dynamic, so it can be built at any time. To get started quickly, build and upload it prior to pushing data into Time Series Insights. To build your model, read Use the Time Series Model.

For many customers, the Time Series Model maps to an existing asset model or ERP system already in place. If you don't have an existing model, a prebuilt user experience is provided to get up and running quickly. To envision how a model might help you, view the sample demo environment.
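As a rough illustration of how the pieces fit together (this is not the exact API schema; the IDs and field names below are hypothetical), a model instance ties a Time Series ID to a type, hierarchies, and descriptive instance fields:

```json
{
  "timeSeriesId": ["plant-01", "line-3", "temp-7"],
  "typeId": "example-temperature-sensor-type-id",
  "name": "Line 3 temperature sensor",
  "hierarchyIds": ["example-plant-hierarchy-id"],
  "instanceFields": {
    "manufacturer": "Contoso",
    "installDate": "2019-06-01"
  }
}
```

Static descriptors like manufacturer live on the instance, so they never need to travel inside telemetry events.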

Shape your events

Verify the way that you send events to Time Series Insights. Ideally, your events are well denormalized so that they're stored and queried efficiently.

A good rule of thumb:

  • Store metadata in your Time Series Model.
  • Ensure that the Time Series Model, instance fields, and events include only necessary information, such as a Time Series ID or Timestamp property.

For more information, read Shape events.
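Applying the rule of thumb above, a well-shaped event carries only the Time Series ID keys, the Timestamp property, and the measurements themselves. The property names below are hypothetical examples:

```json
{
  "plantId": "plant-01",
  "lineId": "line-3",
  "sensorId": "temp-7",
  "timestamp": "2020-01-01T08:30:00Z",
  "temperature": 71.2,
  "humidity": 48.1
}
```

Static metadata such as the sensor's manufacturer or install date belongs in Time Series Model instance fields rather than being repeated in every event.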

Business disaster recovery

This section describes features of Azure Time Series Insights that keep apps and services running, even if a disaster occurs (known as business disaster recovery).

High availability

As an Azure service, Time Series Insights provides certain high availability features by using redundancies at the Azure region level. For example, Azure supports disaster recovery capabilities through Azure's cross-region availability feature.

Additional high-availability features provided through Azure (and also available to any Time Series Insights instance) include:

Make sure you enable the relevant Azure features to provide global, cross-region high availability for your devices and users.

Note

If Azure is configured to enable cross-region availability, no additional cross-region availability configuration is required in Azure Time Series Insights.

IoT and event hubs

Some Azure IoT services also include built-in business disaster recovery features:

Integrating Time Series Insights with the other services provides additional disaster recovery opportunities. For example, telemetry sent to your event hub might also be persisted to a backup Azure Blob storage account.

Time Series Insights

There are several ways to keep your Time Series Insights data, apps, and services running, even if they're disrupted.

However, you might determine that a complete backup copy of your Azure Time Series Insights environment is also required, for the following purposes:

  • As a failover instance specifically for Time Series Insights to redirect data and traffic to
  • To preserve data and auditing information

In general, the best way to duplicate a Time Series Insights environment is to create a second Time Series Insights environment in a backup Azure region. Events are also sent to this secondary environment from your primary event source. Make sure that you use a second dedicated consumer group. Follow that source's business disaster recovery guidelines, as described earlier.

To create a duplicate environment:

  1. Create an environment in a second region. For more information, read Create a new Time Series Insights environment in the Azure portal.
  2. Create a second dedicated consumer group for your event source.
  3. Connect that event source to the new environment. Make sure that you designate the second dedicated consumer group.
  4. Review the Time Series Insights IoT Hub and Event Hubs documentation.
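The resulting topology can be sketched as a simple routing decision: both environments read the same event source through their own dedicated consumer groups, and queries move to the backup when the primary region is affected. The names, regions, and health check below are hypothetical placeholders, not a real failover mechanism:

```python
# Minimal failover sketch for the two-environment setup described above:
# a primary and a backup environment in a second region, each consuming
# the same event source via its own dedicated consumer group.
ENVIRONMENTS = {
    "primary":   {"region": "westus2", "consumer_group": "tsi-primary"},
    "secondary": {"region": "eastus2", "consumer_group": "tsi-secondary"},
}

def active_environment(primary_healthy: bool) -> dict:
    """Route queries to the backup environment when the primary is down."""
    return ENVIRONMENTS["primary" if primary_healthy else "secondary"]

print(active_environment(True)["region"])   # normal operation -> westus2
print(active_environment(False)["region"])  # during failover -> eastus2
```

In practice the health signal and the rerouting would come from your own monitoring and traffic management, not from Time Series Insights itself.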

If an event occurs:

  1. If your primary region is affected during a disaster incident, reroute operations to the backup Time Series Insights environment.
  2. Use your second region to back up and recover all Time Series Insights telemetry and query data.

Important

If a failover occurs:

  • A delay might occur.
  • A momentary spike in message processing might occur as operations are rerouted.

For more information, read Mitigate latency in Time Series Insights.

Next steps