Question

Arishtat67 asked:

Enterprise Data Lake for multiple legacy systems

(Guidance level questions, mostly)

Suppose you have a government agency responsible for motor vehicle and driver license registration (much like DMV in the US). They have separate legacy systems for vehicle registration and driver license registration and driver infractions, which may or may not use a central customer data system. Other organizations, both government and private, require read access to this data as well as reporting and analysis, both curated and ad hoc.

I've been learning data lake technologies, trying to assess whether dumping all of the OLTP data into a data lake to form an enterprise data lake could fulfill all these various needs. What I've learned so far doesn't make me confident enough that I could recommend this as a general solution.

Quick Read Access
Suppose the police need quick read access to check to whom a vehicle is registered (including its history), or how many infractions a driver has. Both are cases where row-level access is required, suggesting that we should have a raw area using Avro. We could possibly also construct separate areas for current and historical data on top of the raw area, since the current data is most often used. I haven't yet gotten as far as building a proof-of-concept that would let me test whether this approach gives us good enough performance (assuming the competing solution would be e.g. Elastic).
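To make the current/historical split concrete, here is a minimal sketch (all names and data invented for illustration) of why a point-lookup-friendly "current" area matters: a query against a raw dump has to scan every record, while a current-data area keyed for lookup behaves like an indexed serving store.

```python
# Hypothetical sketch: raw-zone scan vs. an indexed "current" area.
# Plate numbers, owner IDs, and statuses are made up for illustration.

registrations = [
    {"plate": f"ABC-{i:05d}", "owner_id": i % 1000,
     "status": "current" if i % 3 else "historical"}
    for i in range(50_000)
]

def lookup_by_scan(records, plate):
    """Full scan: roughly how a query over a raw Avro dump behaves."""
    return [r for r in records if r["plate"] == plate]

# "Current" area with an index built once, queried many times.
current_index = {r["plate"]: r for r in registrations
                 if r["status"] == "current"}

def lookup_by_index(index, plate):
    """O(1) point lookup: what an indexed serving store (or Elastic) gives."""
    return index.get(plate)

assert lookup_by_scan(registrations, "ABC-00001")[0]["owner_id"] == 1
assert lookup_by_index(current_index, "ABC-00001")["owner_id"] == 1
```

The sketch only models the access pattern, not real latency, but it is the same trade-off a proof-of-concept would have to measure.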

Analytics and Reporting
Other actors, such as the department of transportation and the transportation industry, want curated reports as well as to do analytics to support their decision making. For that, AFAIK, we should construct separate areas using a column-oriented file format such as Parquet, which would then open the door to various analytics and reporting technologies.
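The reason a column-oriented format suits this area can be shown with a tiny sketch (rows and column names invented): an aggregate only has to read the columns it touches, which is exactly what Parquet's column chunks enable.

```python
# Hypothetical sketch: row-oriented records vs. a columnar layout.

rows = [  # row-oriented, as OLTP records or Avro would store them
    {"driver_id": 1, "infractions": 2, "region": "north"},
    {"driver_id": 2, "infractions": 0, "region": "south"},
    {"driver_id": 3, "infractions": 5, "region": "north"},
]

# Column-oriented layout: one array per column, like Parquet column chunks.
columns = {
    "driver_id":   [r["driver_id"] for r in rows],
    "infractions": [r["infractions"] for r in rows],
    "region":      [r["region"] for r in rows],
}

# "Total infractions?" reads a single column, not every full record.
total = sum(columns["infractions"])
print(total)  # 7
```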

There is a lot in this solution that appeals to me, but I have a lot of concerns as well. All OLTP systems are on-prem and use all sorts of database technologies, so transferring the data to Azure Data Lake is a bit of a challenge, but Data Factory should be able to handle that. Data security is another concern: read access requires fine-grained access control, as the data is sensitive. On the upside, as one example, instead of us building a service for the police that returns vehicle or infraction data, we could just give the police limited access to the appropriate areas of the data lake and tell them to build their own app. Similarly, we could grant the department of transportation's data scientists access to the curated data area so they can do their own analytics without involving us. My main motivation for even researching this solution is that it brings all the data together, past and future, and opens new possibilities for interaction between different actors without setting up a new project every time new data is needed. Instead, we could just say, "here's the data, knock yourself out".
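The "access privileges per area" idea can be sketched as prefix-based zone ACLs, loosely modelled on the POSIX-style ACLs ADLS Gen2 applies per directory. Zone names and principals below are invented for illustration.

```python
# Hypothetical sketch of zone-level access control for the lake.
# Each zone prefix lists the principals allowed to read under it.

ZONE_ACLS = {
    "raw/": {"etl-service"},  # ingestion pipeline only
    "curated/registrations/": {"etl-service", "police-app"},
    "curated/analytics/": {"etl-service", "dot-data-science"},
}

def can_read(principal: str, path: str) -> bool:
    """Grant read if any ACL'd zone prefix of the path lists the principal."""
    return any(path.startswith(zone) and principal in readers
               for zone, readers in ZONE_ACLS.items())

assert can_read("police-app", "curated/registrations/2023/part-0001.parquet")
assert not can_read("police-app", "raw/dmv-dump.avro")
assert can_read("dot-data-science", "curated/analytics/report.parquet")
```

Note this only controls *which files* a principal can read; it says nothing about which rows inside a file, which is the gap the answer below raises.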

What I'm looking for is guidance on the viability of this solution, so that I can decide whether to pursue it or abandon it altogether. I realize this is very high-level, but I'm sure others have struggled with the same questions. Whitepapers, references and case studies would be greatly appreciated.

azure-data-lake-storage, azure-data-lake-analytics

1 Answer

HimanshuSinha-MSFT answered:

Hello @Arishtat67,
Thanks for the ask and for using the Microsoft Q&A platform.
As you mentioned, the data are coming from different sources, and I agree that Azure Data Factory should do the trick. You may also want to consider Synapse; it also supports the self-hosted integration runtime (SHIR), which is vital in your case since the sources are on-premises.

You also asked about security. Row-level security is not available in Azure Data Lake itself; it is only possible when you funnel this data into a different system such as a data warehouse or Synapse (which supports RLS), read here.
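To illustrate what row-level security adds on top of zone-level file access, here is a minimal sketch (principals and rules invented) of the idea behind a Synapse security policy: a per-principal predicate filters which rows a query returns.

```python
# Hypothetical sketch of RLS as a per-principal row predicate.

infractions = [
    {"driver_id": 1, "district": "east", "points": 3},
    {"driver_id": 2, "district": "west", "points": 1},
]

RLS_PREDICATES = {
    # An east-district officer sees only east-district rows.
    "officer-east": lambda row: row["district"] == "east",
    # An auditor sees everything.
    "auditor": lambda row: True,
}

def query(principal, rows):
    """Apply the principal's predicate; unknown principals see nothing."""
    pred = RLS_PREDICATES.get(principal, lambda row: False)
    return [r for r in rows if pred(r)]

assert [r["driver_id"] for r in query("officer-east", infractions)] == [1]
assert len(query("auditor", infractions)) == 2
```

In Synapse this predicate lives in the database as a security policy rather than in application code, which is why the data has to land in such a system before RLS applies.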

Take the scenario you described: say the police are looking up my driving history. The question is whether they need it in 2 minutes (when they pull someone over) or in 2 hours. If they need it in two minutes, you will have to make sure the data is stored in a system that is properly indexed/partitioned. If the turnaround time is 2 hours, I think you can move the records to a different data lake using ADF (or maybe something else) and the police can access the data there.
I'm not sure if you are considering Azure Databricks; it also supports RLS.



Thanks
Himanshu
Please do consider clicking on "Accept Answer" and "Up-vote" on the post that helps you, as it can be beneficial to other community members


2 Comments

Hello @Arishtat67,
We haven't heard back from you on the last response and were just checking in to see whether you have a resolution yet. If you do, please share it with the community, as it can be helpful to others. Otherwise, reply with more details and we will try to help.
Thanks
Himanshu


I may be wrong here, but my understanding is that we wouldn't initially store the data in Synapse but rather in Azure Data Lake. The idea is to utilize different areas and define access privileges per area. Data from the sources is initially stored in the raw area with very limited access; some of the raw data is then curated and stored into other areas. My greatest concern is that the quick read access needs to be competitive with a solution like Elastic, so we are talking about < 1 sec response times for queries (e.g. the police querying someone's driver's license).
