I'm trying to find the best way to serve my machine learning models for inference over HTTP, with the inference running on GPUs. Does Azure Functions support GPUs? If not, what other options should I look into?
Note: I also want to run packaged, off-the-shelf models, not necessarily ones of my own creation (such as EasyOCR for Python).