Hi, I am new to Azure ML and have been trying to replicate the structure of the MNIST tutorial, but I don't understand how to adapt it to my case.
I am running a Python script as part of an experiment, but I don't understand how that script can access data that currently sits in a folder in the cloud file system.
I have found many examples of accessing a single .csv file, but my data consists of many images.
From my understanding, I should use Dataset.File.upload_directory to upload the folder to a datastore and create a dataset from it. Here is how I tried to do that:
```python
from azureml.core import Workspace, Datastore, Dataset

ws = Workspace.from_config()

# Create dataset from data directory
datastore = Datastore.get(ws, 'workspaceblobstore')
target = (datastore, 'reduced_classification')  # destination path on the datastore (placeholder name)

dataset = Dataset.File.upload_directory(
    src_dir=path_data,  # local folder containing my images
    target=target,
    pattern=None,
    overwrite=False,
    show_progress=True,
)
file_dataset = dataset.register(
    workspace=ws,
    name='reduced_classification_dataset',
    description='reduced_classification_dataset',
    create_new_version=True,
)
```
But then I don't understand whether I can access this data like a normal file system from my Python script, or whether further steps are needed to do that.
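For context, this is roughly what I was hoping to do inside the training script, treating the dataset folder like a normal local directory (the function name, the `data_path` argument, and the extensions are just my placeholders; `data_path` would be wherever Azure ML makes the dataset available to the run):

```python
import os

def list_images(data_path, exts=('.png', '.jpg')):
    """Collect image file paths under data_path, walking it like a local folder.

    data_path is assumed to be the folder where the FileDataset is
    mounted or downloaded for the run; exts is a placeholder for my
    actual image format.
    """
    found = []
    for root, _dirs, files in os.walk(data_path):
        for name in files:
            if name.lower().endswith(exts):
                found.append(os.path.join(root, name))
    return sorted(found)
```

Is reading the folder directly like this valid, or do I first need something like `as_mount()`/`as_download()` when configuring the run?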