I have a Linux image that provides predictions as a service. The predictions are stored as files in a mounted Azure file share.
The image uses at most 2 GB of memory. Even in Azure it doesn't seem to need more; locally in Docker it allocates less than 1 GB.
But for some reason, the amount of memory we allocate to the container instance dictates whether all of the files are visible to it.
If I create a container instance with 1 CPU and 12 GB of memory, all files are available, even though the image doesn't need that much. If I create an instance with less than 12 GB, some of the files aren't available and the service throws file-not-found errors.
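For reference, here's a sketch of the kind of commands involved. The resource group, registry, storage account, and share names are placeholders (not from my actual setup), but the CPU/memory values match what I described:

```shell
# Working: 1 CPU / 12 GB -- all files on the mounted share are visible
az container create \
  --resource-group my-rg \
  --name predictions-svc \
  --image myregistry.azurecr.io/predictions:latest \
  --cpu 1 --memory 12 \
  --azure-file-volume-account-name mystorageacct \
  --azure-file-volume-account-key "$STORAGE_KEY" \
  --azure-file-volume-share-name predictions \
  --azure-file-volume-mount-path /mnt/predictions

# Failing: the same command with --memory set below 12 (e.g. --memory 4)
# results in file-not-found errors for some files under /mnt/predictions.
```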
Does anyone have any clue what could cause this?