When we use Azure AutoML via the user interface, what data does it use to calculate the metrics?
Does it do a train/test split? If so, are the reported metrics computed on the test data?
Or does it use the whole dataset to validate the models, so that the metrics come from cross-validation?
If it uses the whole dataset for training, then I should do the train/test split myself and upload only the training set (after cleaning the data first). Then I can deploy the model and score it with the held-out test data to see how accurate it really is.
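For what it's worth, the local hold-out step described above can be sketched as below. This is just an illustration using the standard library, not Azure-specific code; the dataset `rows` is a made-up stand-in for the cleaned data, and the 80/20 ratio is an assumption:

```python
import random

# Hypothetical cleaned dataset: a list of (features, label) rows
# standing in for the full table you would otherwise upload to AutoML.
rows = [({"x": i}, i % 2) for i in range(100)]

random.seed(42)          # fix the seed so the split is reproducible
random.shuffle(rows)

split = int(len(rows) * 0.8)              # assumed 80/20 train/test split
train_rows, test_rows = rows[:split], rows[split:]

# Upload only train_rows to Azure AutoML; keep test_rows locally.
# After the AutoML run finishes, score test_rows against the deployed
# model to get a metric on data the service never saw.
print(len(train_rows), len(test_rows))    # 80 20
```

The key point is simply that the test rows never leave your machine until scoring time, so whatever metrics the UI reports, you still get one honest number on unseen data.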
If that is the case, this feature is not very useful to me.
I could just use the Python SDK directly.
Please help me clarify whether that is the case, i.e., whether the metrics come only from cross-validation.