I noticed that there is only a limited set of primary metrics to choose from when I run an automated ML experiment — fewer than the run metrics shown on the results page. I want to handle imbalanced data (roughly 10:1 or 20:1) and looked at the links below:
https://docs.microsoft.com/en-us/azure/machine-learning/concept-manage-ml-pitfalls#identify-models-with-imbalanced-data
and
https://docs.microsoft.com/en-us/azure/machine-learning/how-to-configure-auto-train
It seems the F1 score is recommended for evaluating models on imbalanced data.
Here are my questions:
Is there any way to set the F1 score, or multiple measures, as the primary metric?
If there is no such way, should I compute it manually after the run?
Of the available primary metrics, which one is the most appropriate for building a classification model on imbalanced data?
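For context on question 2: if I do end up evaluating F1 manually, I assume it would look something like this sketch, which computes precision, recall, and F1 for the minority (positive) class from a model's predictions. This is plain Python for illustration (in practice I could use `sklearn.metrics.f1_score` instead); the labels below are made-up example data, not from my actual run:

```python
def f1_score(y_true, y_pred, positive=1):
    """Compute F1 for the positive (minority) class from raw predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy example at ~5:1 imbalance: the model finds 3 of 4 minority positives
# and raises 1 false alarm, so precision = recall = 0.75 and F1 = 0.75.
y_true = [0] * 20 + [1, 1, 1, 1]
y_pred = [0] * 19 + [1] + [1, 1, 1, 0]
print(f1_score(y_true, y_pred))  # → 0.75
```

The idea would be to fetch each AutoML child run's predictions on a held-out set and rank the models by this score myself, rather than by the built-in primary metric.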
Thanks.