@Ludmiła Szojda Thanks for getting back and clarifying your ask. Regarding your question about "Implement meaningful human oversight": in the context of AI systems, this generally refers to the practice of keeping humans involved in the AI system's decision-making process. The goal is to ensure that the AI system's decisions align with human values, ethics, and legal norms.
In the context of the Azure OpenAI Service, it means that while the AI system can automate certain tasks and make predictions or suggestions, the final decision should be validated or approved by a human, especially in critical or sensitive situations. This helps prevent potential misuse of the AI system and ensures that its behavior aligns with Microsoft's responsible AI principles.
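To make this concrete, here is a minimal sketch of a human-in-the-loop review gate. Everything in it (the `AISuggestion` type, `requires_human_review`, the `SENSITIVE_TOPICS` set, and the confidence threshold) is an illustrative assumption for this example, not part of any Azure OpenAI API:

```python
# Hypothetical human-in-the-loop gate: AI output is only finalized
# automatically when it is low-risk; otherwise a human must approve it.
from dataclasses import dataclass

# Illustrative assumption: topics your policy treats as sensitive.
SENSITIVE_TOPICS = {"medical", "legal", "financial"}


@dataclass
class AISuggestion:
    text: str
    topic: str
    confidence: float  # model's estimated confidence, 0.0 to 1.0


def requires_human_review(s: AISuggestion, threshold: float = 0.9) -> bool:
    """Route a suggestion to a human reviewer when it touches a
    sensitive topic or falls below the confidence threshold."""
    return s.topic in SENSITIVE_TOPICS or s.confidence < threshold


def finalize(s: AISuggestion, human_approves) -> str:
    """Return the suggestion text only if it passes the review gate,
    either automatically or via explicit human approval."""
    if requires_human_review(s) and not human_approves(s):
        return "rejected"
    return s.text
```

The key design point is that the automated path is the exception, not the rule: anything sensitive or uncertain is forced through the human reviewer, which is the pattern the transparency note recommends for high-stakes scenarios.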
On a side note, the same point is also covered in the Azure OpenAI transparency note documentation:
Not suitable for scenarios where up-to-date, factually accurate information is crucial unless you have human reviewers or are using the models to search your own documents and have verified suitability for your scenario. The service does not have information about events that occur after its training date, likely has missing knowledge about some topics, and may not always produce factually accurate information.
Hope this answers your question.