Cost savings through HTAP with Azure SQL

Azure Databricks
Azure Event Hubs
Azure Functions
Azure SQL Database
Azure Stream Analytics

This article outlines a solution for a hybrid transaction/analytical processing (HTAP) architecture. Transaction processing involves low-latency, high-volume operational workloads, whereas analytics workloads are typically higher latency and lower volume. HTAP architectures handle both workload types in a single system. By using in-memory databases, HTAP consolidates technologies to optimize queries on large volumes of data.

Architecture

Architecture diagram showing how data flows through an HTAP solution with Azure SQL Database at its center.

Download a Visio file of this architecture.

Dataflow

  1. Event Hubs ingests telemetry from on-premises facilities.
  2. Blob Storage captures the Event Hubs data and stores it for future analysis.
  3. Stream Analytics processes the data. The solution's hot path queries Azure Cosmos DB for data from the previous two months. Azure Cosmos DB guarantees single-digit millisecond response times. A minimal hot-path query sketch appears after this list.
  4. If errors occur during data processing or storage, the system logs them in Azure Table Storage.
  5. Azure Functions uses the SQL Database elastic database client library to archive the data. This process partitions the data to optimize insert operations. The solution forms shards by horizontally distributing the data across several Azure SQL databases. Each database uses a partitioned clustered columnstore index to compress tables. Response times on this cold path are usually below one second.
  6. An Azure Databricks cluster reprocesses the Blob Storage data. Specifically, Azure Databricks deserializes Avro files and sends the data to Event Hubs for optional analysis.
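
The hot path in step 3 amounts to time-bounded queries against Azure Cosmos DB. The following Python sketch is a minimal illustration of such a query; the account endpoint, key, database, container, and the eventTime property are hypothetical placeholders for this example, not details from the reference architecture.

```python
# Minimal hot-path query sketch (dataflow step 3) using the azure-cosmos SDK.
# Endpoint, key, database, container, and eventTime are hypothetical names.
from datetime import datetime, timedelta, timezone

from azure.cosmos import CosmosClient

client = CosmosClient(
    "https://<account>.documents.azure.com:443/", credential="<account-key>"
)
container = client.get_database_client("telemetry").get_container_client("hot")

# The hot path serves only the most recent two months of data.
cutoff = (datetime.now(timezone.utc) - timedelta(days=60)).isoformat()

items = container.query_items(
    query="SELECT * FROM c WHERE c.eventTime >= @cutoff",
    parameters=[{"name": "@cutoff", "value": cutoff}],
    enable_cross_partition_query=True,
)

for item in items:
    print(item["id"], item["eventTime"])
```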

Components

  • Event Hubs is a fully managed streaming platform for big data.

  • Stream Analytics provides real-time serverless stream processing by running queries in the cloud and on edge devices.

  • Azure Cosmos DB is a globally distributed, multi-model database. With Azure Cosmos DB, your solutions can elastically scale throughput and storage across any number of geographic regions.

  • Table Storage is part of Azure Storage. This service stores structured NoSQL data in the cloud.

  • SQL Database is a relational database service that's part of the Azure SQL family. As a fully managed service, SQL Database handles database management functions. SQL Database also provides AI-powered, automated features that optimize performance and durability. Serverless compute and Hyperscale storage options automatically scale resources on demand.

  • Elastic database tools help you create and manage scaled-out databases. This feature of SQL Database includes a client library that you can use to develop sharded applications. A conceptual sketch of this solution's sharded archiving step appears after this component list.

  • Blob Storage is a service that's part of Azure Storage. Blob Storage offers optimized cloud object storage for large amounts of unstructured data.

  • Azure Databricks is a data analytics platform. Its fully managed Spark clusters process large streams of data from multiple sources. Azure Databricks cleans and transforms unstructured data sets. It combines the processed data with structured data from operational databases or data warehouses. Azure Databricks also trains and deploys scalable machine learning and deep learning models.

  • Power BI is a collection of analytics services and apps. You can use Power BI to connect unrelated sources of data and present them in coherent visualizations.
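
The elastic database client library itself is a .NET library. As a language-neutral illustration of the sharded archiving step (dataflow step 5), the following Python sketch routes each row to a shard by a stable hash of its device ID and batches the inserts per shard; the connection strings, table, and column names are assumptions for this example, not the library's API.

```python
# Conceptual sketch of sharded archiving (dataflow step 5). The solution itself
# uses the .NET elastic database client library; this Python/pyodbc version only
# illustrates hash-based shard routing plus batched inserts. Connection strings,
# table, and column names are hypothetical.
import hashlib

import pyodbc

SHARD_CONNECTION_STRINGS = [
    "Driver={ODBC Driver 18 for SQL Server};Server=shard0.database.windows.net;...",
    "Driver={ODBC Driver 18 for SQL Server};Server=shard1.database.windows.net;...",
    # ...one entry per Azure SQL database shard
]


def shard_for(device_id: str) -> str:
    """Map a device ID to a shard connection string with a stable hash."""
    digest = int(hashlib.sha256(device_id.encode()).hexdigest(), 16)
    return SHARD_CONNECTION_STRINGS[digest % len(SHARD_CONNECTION_STRINGS)]


def archive(events: list[dict]) -> None:
    """Group events by target shard, then insert each group as one batch."""
    by_shard: dict[str, list[tuple]] = {}
    for event in events:
        row = (event["device_id"], event["event_date"], event["reading"])
        by_shard.setdefault(shard_for(event["device_id"]), []).append(row)

    for conn_str, rows in by_shard.items():
        with pyodbc.connect(conn_str) as conn:
            cursor = conn.cursor()
            cursor.fast_executemany = True  # ship parameter batches efficiently
            cursor.executemany(
                "INSERT INTO dbo.TelemetryArchive (DeviceId, EventDate, Reading) "
                "VALUES (?, ?, ?)",
                rows,
            )
            conn.commit()
```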

Scenario details

Azure SQL Database forms the core of this HTAP solution. The approach divides the data into horizontally distributed databases, or shards. Other main components include:

  • Azure Event Hubs for data ingestion.
  • Azure Stream Analytics for data processing.
  • Azure Functions for partitioning.
  • Azure Blob Storage for event storage.

Together, these services provide an HTAP solution that:

  • Reduces costs by providing fast access to insights on archived data. With this solution, latencies on the cold path drop from hours to under a second.
  • Simplifies archiving by automatically adding data to long-term storage.
  • Maximizes scalability by sharding data and using an elastic database.

Potential use cases

This solution applies to organizations that need low-latency access to large volumes of historical data. Examples include:

  • Online retailers that access customer history and demographic information to provide personalized experiences.
  • Energy providers that combine device data with analytics to manage smart power grids.
  • Businesses that engage in fraud prevention by identifying patterns in historical and real-time data. This scenario applies to the finance and financial services industries.
  • Manufacturers that rely on real-time event processing to identify problems in production.

Considerations

These considerations implement the pillars of the Azure Well-Architected Framework, which is a set of guiding tenets that can be used to improve the quality of a workload. For more information, see Microsoft Azure Well-Architected Framework.

This solution makes the following assumptions:

  • After you archive the data, you don't need to update or delete it.
  • The data schema changes only minimally over time.

Keep the following considerations in mind when implementing this solution:

Performance efficiency

Performance efficiency is the ability of your workload to scale to meet the demands placed on it by users in an efficient manner. For more information, see Performance efficiency pillar overview.

  • To optimize performance:

    • Combine sharding with table compression.
    • Partition tables by date. Each shard contains data from a different period.
    • Align indexes with the date partitioning.
  • To scale up to more than 50,000 messages per second, use the elastic database client library from within Functions to:

    • Group messages by partition.
    • Split insert statements into small batches.

    This approach is suitable for systems that use 10 SQL databases at the Standard S3 performance level. To host a columnstore index, you need at least the Standard tier.

  • For best performance during insert operations, use table-valued parameters with stored procedures.

  • When you use the CREATE COLUMNSTORE INDEX statement, use the COLUMNSTORE_ARCHIVE option. This option provides the highest possible level of compression. A high compression level increases the time that you need to store and retrieve data, but the resulting I/O performance should still be satisfactory. The sketch after this list shows how the partitioning, columnstore, and table-valued parameter recommendations fit together.
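
The following sketch combines the recommendations in this list on a single shard: a table partitioned by month, a clustered columnstore index with COLUMNSTORE_ARCHIVE compression aligned to that partitioning, and a table-valued parameter for batched inserts through a stored procedure. The T-SQL runs through pyodbc here, and all object names and partition boundaries are assumptions for this example.

```python
# Sketch of the partitioning, columnstore, and TVP recommendations on one
# shard. Object names and partition boundaries are hypothetical examples.
from datetime import date

import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};Server=<shard>;Database=<db>;..."
)
cursor = conn.cursor()

# Partition the table by month so each partition holds a distinct period.
cursor.execute("""
    CREATE PARTITION FUNCTION pf_EventMonth (date)
        AS RANGE RIGHT FOR VALUES ('2024-01-01', '2024-02-01', '2024-03-01');
""")
cursor.execute("""
    CREATE PARTITION SCHEME ps_EventMonth
        AS PARTITION pf_EventMonth ALL TO ([PRIMARY]);
""")
cursor.execute("""
    CREATE TABLE dbo.TelemetryArchive (
        DeviceId  int   NOT NULL,
        EventDate date  NOT NULL,
        Reading   float NOT NULL
    ) ON ps_EventMonth (EventDate);
""")

# Align the clustered columnstore index with the date partitioning and apply
# the highest compression level.
cursor.execute("""
    CREATE CLUSTERED COLUMNSTORE INDEX cci_TelemetryArchive
        ON dbo.TelemetryArchive
        WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE)
        ON ps_EventMonth (EventDate);
""")

# Table-valued parameter plus stored procedure for batched inserts.
cursor.execute("""
    CREATE TYPE dbo.TelemetryRow AS TABLE (
        DeviceId int, EventDate date, Reading float);
""")
cursor.execute("""
    CREATE PROCEDURE dbo.usp_InsertTelemetry @rows dbo.TelemetryRow READONLY AS
        INSERT INTO dbo.TelemetryArchive (DeviceId, EventDate, Reading)
        SELECT DeviceId, EventDate, Reading FROM @rows;
""")
conn.commit()

# pyodbc passes a TVP as a single parameter holding a sequence of row tuples.
rows = [(1, date(2024, 1, 15), 21.7), (2, date(2024, 1, 15), 19.3)]
cursor.execute("{CALL dbo.usp_InsertTelemetry (?)}", (rows,))
conn.commit()
```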

Scalability

  • Use shards so that you can expand your system to meet demanding workloads. When you use sharded databases, you can add or remove shards to scale out or in. The split-merge tool helps you split and merge partitions.
  • Take advantage of the scaling functionality in Functions. Create functions that scale based on CPU and memory usage. Configure the functions to start new instances to accommodate unexpected workloads.
  • Increase the size of your Azure Databricks cluster to scale up Avro file reprocessing. The solution uses Azure Databricks to reprocess Avro files that Blob Storage has captured. Spark clusters in Azure Databricks can process all of the captured Avro files or only part of the capture path. By increasing the Azure Databricks cluster size, you can reprocess all the data within a required time frame. To handle increased volume from Azure Databricks, add event hub instances to the namespace as needed. A reprocessing sketch appears after this list.
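
As a rough illustration of this reprocessing step, the following Databricks (PySpark) sketch reads captured Avro files from Blob Storage and replays their payloads to an event hub. The capture path, connection string, and event hub name are assumptions, and `spark` is the session that Databricks notebooks predefine.

```python
# Sketch of Avro reprocessing in Azure Databricks; `spark` is the session that
# Databricks notebooks predefine. Paths and names are hypothetical.
from azure.eventhub import EventData, EventHubProducerClient

# Event Hubs Capture writes Avro files under a configurable path pattern.
capture_path = (
    "wasbs://capture@<account>.blob.core.windows.net/<namespace>/<hub>/*/*/*/*/*/*/*.avro"
)
df = spark.read.format("avro").load(capture_path)

producer = EventHubProducerClient.from_connection_string(
    conn_str="<event-hubs-connection-string>", eventhub_name="telemetry-replay"
)

# The Body column of a capture file holds the original event payload.
batch = producer.create_batch()
for row in df.select("Body").toLocalIterator():
    try:
        batch.add(EventData(bytes(row["Body"])))
    except ValueError:  # The batch is full: send it and start a new one.
        producer.send_batch(batch)
        batch = producer.create_batch()
        batch.add(EventData(bytes(row["Body"])))

producer.send_batch(batch)
producer.close()
```

This version funnels every row through the driver; at scale, a sender based on foreachPartition spreads the replay work across the cluster's workers.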

Resiliency

  • All of the components in this scenario are managed services that offer built-in resiliency at the regional level.
  • For general guidance on designing resilient solutions, see Overview of the reliability pillar.

Cost optimization

Cost optimization is about looking at ways to reduce unnecessary expenses and improve operational efficiencies. For more information, see Overview of the cost optimization pillar.

To explore the cost of running this scenario, use the Azure pricing calculator, which preconfigures all Azure services. Adjust the parameters to match the traffic that you expect to receive.

The following table lists sample cost profiles for varying amounts of 1-kilobyte messages:

Size   | Message volume                      | Profile
Small  | Fewer than 500 messages per second  | Small profile
Medium | 1,500 messages per second           | Medium profile
Large  | More than 5,000 messages per second | Large profile
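
As a rough sizing check for the ingestion tier: the medium profile's 1,500 one-kilobyte messages per second amount to about 1.5 MB per second of ingress. One Event Hubs standard-tier throughput unit covers up to 1 MB per second or 1,000 events per second of ingress, so that profile needs at least two throughput units.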

Contributors

This article is maintained by Microsoft.