Question

PabloDeLuca-2548 asked:

How do I replicate data from different backends that use SQL Server (all hosted on Azure) in order to build reports? Which Azure products do I need?

Suppose I have four .NET Core web APIs with SQL Server, all hosted under the same resource on Azure. Each system has its own SQL Server database on Azure, and the systems communicate using Azure Service Bus.

My objective is to use Power BI or similar software to create dynamic reports without impacting the performance of each application.

The issue I am facing is that I don't know where and how to store the (replicated) data that I later want to view with the previously mentioned software.

I don't need all the data from all the databases, only a few tables from each system.

For example: from System 1 I want the data in the Clients table, from System 2 the data in the Transactions table, and so on.

I don't want to affect the performance of each system's database when I generate the reports, so I thought that maybe Azure has a way to replicate those tables somewhere automatically.

To do this, which Azure service do I need? Azure Synapse Analytics (DW)? Do I also need another product, or is there maybe a better approach?

I would really appreciate some information on how to achieve this using Azure.

Thanks in advance,

Pablo.

azure-synapse-analytics


Hello @PabloDeLuca-2548, just wanted to follow up and check whether the suggestion below helped you resolve the issue. Also, if you have a better workaround or resolution, please do share it with the community, as it will help other community members.


Thanks & stay safe


Himanshu


peschleg answered:

This isn't a question to answer in a few sentences. It depends on how much data you'll load into Power BI: should it basically just be transferred and visualized? How much data do you load in each cycle, and how much data will you store in total? There are many questions to answer before giving an architectural recommendation.

A basic approach could be to load the data directly into the Power BI service. Or use Azure Data Factory or Azure Synapse Studio (currently in preview) to ELT/ETL the data into another data store. If you need the DWH features of Synapse, use them.
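To make the ELT/ETL idea concrete, here is a minimal sketch in Python with pyodbc that copies one table from a source application database into a separate reporting database. All server, database, table, and credential names are hypothetical placeholders, and a production setup would more likely use a scheduled Azure Data Factory copy pipeline instead:

```python
# Minimal ETL sketch: copy one table from a source application database
# into a separate reporting database, so report queries never touch the
# application DB. All names and credentials here are hypothetical.
import pyodbc

SOURCE = ("DRIVER={ODBC Driver 17 for SQL Server};"
          "SERVER=system1-server.database.windows.net;DATABASE=System1Db;"
          "UID=etl_user;PWD=<password>")
REPORTING = ("DRIVER={ODBC Driver 17 for SQL Server};"
             "SERVER=reporting-server.database.windows.net;DATABASE=ReportingDb;"
             "UID=etl_user;PWD=<password>")

def copy_clients():
    # pyodbc connections used as context managers commit on clean exit.
    with pyodbc.connect(SOURCE) as src, pyodbc.connect(REPORTING) as dst:
        rows = src.cursor().execute(
            "SELECT ClientId, Name, Email FROM dbo.Clients").fetchall()
        cur = dst.cursor()
        cur.execute("TRUNCATE TABLE dbo.Clients")  # full reload each run
        cur.executemany(
            "INSERT INTO dbo.Clients (ClientId, Name, Email) VALUES (?, ?, ?)",
            [tuple(r) for r in rows])

if __name__ == "__main__":
    copy_clients()
```

A full reload like this only makes sense for small tables; for larger ones you would track a watermark column (e.g. a last-modified timestamp) and copy only changed rows each cycle.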

Sorry, but with the given information I can't give a better recommendation. Maybe the first approach is sufficient? Go for it!


IanHenry-1688 answered:

If your Azure SQL databases are on the Premium tier, you have read scale-out. See the main man James Serra: https://www.jamesserra.com/archive/2018/09/azure-sql-database-read-scale-out/

You can then use Synapse PolyBase (data virtualization) to hit these, or point Power BI at them directly or in import mode, as your use cases require.
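For the read scale-out route, the relevant switch is the ApplicationIntent=ReadOnly keyword in the connection string, which routes the session to a readable secondary replica so report queries don't load the primary. A minimal sketch in Python with pyodbc, where the server, database, and credentials are hypothetical placeholders:

```python
# Connect to an Azure SQL Premium database via read scale-out:
# ApplicationIntent=ReadOnly routes the session to a readable secondary,
# so report queries don't compete with the application workload.
# Server, database, and credentials are hypothetical placeholders.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=system1-server.database.windows.net;"
    "DATABASE=System1Db;"
    "UID=report_reader;PWD=<password>;"
    "ApplicationIntent=ReadOnly"
)

with pyodbc.connect(conn_str) as conn:
    cur = conn.cursor()
    # Confirm the session actually landed on the read-only replica.
    cur.execute("SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability')")
    print(cur.fetchone()[0])  # expected: READ_ONLY
```

Any client that supports SQL Server connection-string keywords can set the same ApplicationIntent flag, so reporting tools can be pointed at the replica in the same way.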


RaGus-6891 answered:

Hi!

I can try to make a suggestion for your goal.

If you have small/medium-sized data, you can do the following.
1) Replicate your data into a Data Lake Storage account using Databricks, persisting it in the Delta Lake format (so you can do MERGE and other nice transactional operations; see the sketch after this list).
2) From the Delta tables, build a more "cleaned" layer using Parquet files, so that other tools can read them.
3) Use Synapse Analytics SQL on-demand for reading the whole Parquet dataset when you need it (for example in a Power BI table refresh), or via direct query.
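Here is a minimal PySpark sketch of steps 1 and 2, assuming a Databricks cluster with the Delta Lake library available; the storage paths, table name, and ClientId merge key are hypothetical placeholders:

```python
# Steps 1-2 sketched in PySpark on Databricks (Delta Lake library required).
# Paths, table names, credentials, and the ClientId key are hypothetical.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

RAW_DELTA = "abfss://lake@mystorageacct.dfs.core.windows.net/delta/clients"
CLEAN_PARQUET = "abfss://lake@mystorageacct.dfs.core.windows.net/clean/clients"

# 1) Read the latest Clients rows from the source system over JDBC...
updates = (spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://system1-server.database.windows.net;"
                   "database=System1Db")
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .option("dbtable", "dbo.Clients")
    .option("user", "etl_user")
    .option("password", "<password>")
    .load())

# ...and MERGE them into the Delta table (update changed rows, insert new).
if DeltaTable.isDeltaTable(spark, RAW_DELTA):
    (DeltaTable.forPath(spark, RAW_DELTA).alias("t")
        .merge(updates.alias("s"), "t.ClientId = s.ClientId")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())
else:
    updates.write.format("delta").save(RAW_DELTA)

# 2) Publish a cleaned Parquet layer that other tools can read directly.
(spark.read.format("delta").load(RAW_DELTA)
    .write.mode("overwrite").parquet(CLEAN_PARQUET))
```

Synapse SQL on-demand (step 3) can then query the Parquet folder directly with OPENROWSET, so Power BI never touches the application databases.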

Synapse Analytics is now in preview again, but I hope this product will reach stable status ASAP.
If you have money to spend, also consider an always-running Databricks cluster, which additionally offers you a Spark-queryable endpoint if you want to reach your data using a JDBC/ODBC connector.

Feel free to ask for clarifications.

Ra
