It's terrific and unique that Azure Storage supports server-to-server transfers of data from arbitrary URLs on the internet to a blob; however, it cannot access some URLs because it erroneously unescapes percent-encoded characters that the URL needs. For example, it unescapes "https://example.com/foo%2fbar.txt" to "https://example.com/foo/bar.txt" before fetching. I'm looking for any help or guidance to get Azure Storage to PUT from these URLs without Azure Storage unescaping them.
"%2F" may not be equivalent to "/" in URLs:
My story is that I'm looking to build a tool or toolkit to copy Google Takeout data to Azure Storage blobs quickly, for archival storage in case some human- or robot-created disaster happens to the Google account. The Google Takeout URLs look like this (this URL is genuine but has long expired):
Azure Storage unescapes each "%2F" in the path (before the "?") to "/", resulting in a 404 error because Google's endpoint requires the literal "%2F" there.
My goal is to have Azure Storage transfer data directly from Google Takeout's signed URLs, massively in parallel. The benchmark I'm trying to achieve is moving 1TB from Google Takeout to Azure Storage in under a minute. Unfortunately, this unwanted unescaping in Azure Storage's HTTP client puts a halt to that. While Google does appear to use Cloud Storage buckets to back its Takeout offering, I have no access to the underlying buckets, only the presumably signed URLs Takeout's web interface gives me, so I cannot use Azure's current implementation of downloading from Google Cloud Storage buckets.
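For that kind of throughput, the plan would be the usual block-blob pattern: stage many blocks in parallel with Put Block From URL (each call pulling one x-ms-source-range slice of the source), then commit them with a single Put Block List. A sketch of the fan-out in Go, where stageBlock is a hypothetical stand-in for the actual REST call and the 100MB block size is an assumption based on older service-version limits:

```go
package main

import (
	"fmt"
	"sync"
)

const blockSize = 100 * 1024 * 1024 // assumed per-block cap; newer versions allow more

// stageBlock is a hypothetical stand-in for one Put Block From URL
// call staging bytes [off, off+n) of the source URL as one block.
func stageBlock(blockIndex int, off, n int64) error {
	// Real code would PUT ...?comp=block&blockid=... with
	// x-ms-copy-source and x-ms-source-range headers here.
	return nil
}

func main() {
	const totalSize = int64(1) << 30 // pretend the source object is 1GB

	var wg sync.WaitGroup
	sem := make(chan struct{}, 32) // cap concurrency at 32 in-flight calls

	for i, off := 0, int64(0); off < totalSize; i, off = i+1, off+blockSize {
		n := int64(blockSize)
		if rem := totalSize - off; rem < n {
			n = rem
		}
		wg.Add(1)
		sem <- struct{}{}
		go func(i int, off, n int64) {
			defer wg.Done()
			defer func() { <-sem }()
			if err := stageBlock(i, off, n); err != nil {
				fmt.Println("stage failed:", err)
			}
		}(i, off, n)
	}
	wg.Wait()
	// A single Put Block List naming every staged block would
	// commit the blob here.
	fmt.Println("all blocks staged")
}
```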
For testing, Google Takeout's UI requires too much authentication and takes a long time to generate downloads. Alternatively, you can generate similar URLs by clicking the "Authenticated URL" link for an object in Google Cloud Storage's UI, but those are also temporary. I'm sure other Google services can generate similar URLs as well. As it is a pain to continually generate Google URLs to demonstrate the issue, I've set up a small demo server here. The server has also been helpful for diagnosing exactly what requests Azure Storage is making:
The URLs of interest are:
https://put-block-from-url-esc-issue-demo-server-3vngqvvpoq-uc.a.run.app/red/blue.txt will show a 404 as that isn't the URL desired.
https://put-block-from-url-esc-issue-demo-server-3vngqvvpoq-uc.a.run.app/red%2Fblue.txt will show a 200 as that is the URL desired.
https://put-block-from-url-esc-issue-demo-server-3vngqvvpoq-uc.a.run.app/normal.txt will show a 200 as it's just a run-of-the-mill URL.
You can find the source for that Go server here:
I have my test server hosted on Google Cloud Run as it is cost-efficient for scale-to-zero applications, but I see no reason you can't run the test server locally and expose it to the public internet (and thus to Azure Storage) with something like ngrok.com, run it on any other platform that supports containerized applications and exposes them publicly, or run it on any platform that can run a Go application and expose it publicly.
The server also logs every request it receives to STDOUT.
Here is the source for a small C# client counterpart that you can edit, compile, and run to demonstrate the issue:
As mentioned, you cannot store "https://put-block-from-url-esc-issue-demo-server-3vngqvvpoq-uc.a.run.app/red%2Fblue.txt" to a blob in a container, but any other URL served from the server mentioned above will work. If I request "https://put-block-from-url-esc-issue-demo-server-3vngqvvpoq-uc.a.run.app/red%2Fblue.txt" from the demo client, the server sees "https://put-block-from-url-esc-issue-demo-server-3vngqvvpoq-uc.a.run.app/red/blue.txt". I'm able to request "https://put-block-from-url-esc-issue-demo-server-3vngqvvpoq-uc.a.run.app/normal.txt" just fine.
Double-escaping the URL as https://put-block-from-url-esc-issue-demo-server-3vngqvvpoq-uc.a.run.app/red%252Fblue.txt does not work either.
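For reference, what the client exercises is the Put Block From URL REST operation, which in any language reduces to one PUT with an x-ms-copy-source header. A hedged sketch in Go of building that request (the account, container, blob, and SAS token below are placeholders, and a real call would need a valid SAS or an Authorization header):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"net/http"
	"net/url"
)

// newPutBlockFromURLRequest builds the Put Block From URL REST request.
// blobURL is "https://<account>.blob.core.windows.net/<container>/<blob>"
// and sasQuery is a SAS token query string ("sv=..."); both are
// placeholders in this sketch.
func newPutBlockFromURLRequest(blobURL, sasQuery, blockID, sourceURL string) (*http.Request, error) {
	q := url.Values{}
	q.Set("comp", "block")
	q.Set("blockid", base64.StdEncoding.EncodeToString([]byte(blockID)))

	full := blobURL + "?" + q.Encode()
	if sasQuery != "" {
		full += "&" + sasQuery
	}

	req, err := http.NewRequest(http.MethodPut, full, nil)
	if err != nil {
		return nil, err
	}
	// The service fetches this URL server-side; this header value is
	// what Azure Storage is currently unescaping before use.
	req.Header.Set("x-ms-copy-source", sourceURL)
	req.Header.Set("x-ms-version", "2020-10-02")
	req.ContentLength = 0
	return req, nil
}

func main() {
	req, err := newPutBlockFromURLRequest(
		"https://myaccount.blob.core.windows.net/archive/blue.txt", // placeholder
		"sv=placeholder-sas-token",
		"block-0000",
		"https://put-block-from-url-esc-issue-demo-server-3vngqvvpoq-uc.a.run.app/red%2Fblue.txt",
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(req.URL.String())
	fmt.Println(req.Header.Get("x-ms-copy-source"))
}
```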
All said, any help would be appreciated. For archives up to 180TB, Azure Storage is the most cost-efficient way to archive Google Takeout, and with this issue solved, possibly the fastest!