Wrong cutoff date for GPT-4 Turbo in East US 2

Gabriel Susai 35 Reputation points
2024-05-13T21:08:21.58+00:00

I recently deployed a gpt-4 (turbo-2024-04-09) model in East US 2, which has a training data cutoff of December 2023 according to the link below. When asked, the model says it only includes information up to September 2021. I initially thought this was a glitch, but I have also observed that the model lacks information on events that occurred in 2023.

https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#gpt-4-turbo


Accepted answer
  1. AshokPeddakotla-MSFT 28,316 Reputation points
    2024-05-14T01:32:36.0233333+00:00

    Gabriel Susai Greetings!

    I understand that you are getting a wrong answer related to the model's knowledge cutoff date. This is expected behavior. Please see the FAQ for more details.

    I asked the model when its knowledge cutoff is and it gave me a different answer than what is on the Azure OpenAI model's page. Why does this happen?

    This is expected behavior. The models aren't able to answer questions about themselves. If you want to know when the knowledge cutoff for the model's training data is, consult the models page.
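    As a quick illustration, here is a minimal sketch of what asking the deployment looks like, using the openai Python package against an Azure OpenAI resource (the endpoint, key, API version, and deployment name below are placeholders, not values from your setup):

    ```python
    # Minimal sketch: ask a gpt-4 (turbo-2024-04-09) deployment about its own cutoff.
    # Endpoint, key, API version, and deployment name are placeholders.
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://<your-resource>.openai.azure.com/",
        api_key="<your-api-key>",
        api_version="2024-02-01",
    )

    response = client.chat.completions.create(
        model="<your-gpt-4-turbo-deployment>",  # deployment name, not the base model name
        messages=[{"role": "user", "content": "What is your knowledge cutoff date?"}],
    )

    # The reply often says September 2021 or similar; it is not authoritative.
    # The documented training data cutoff is on the Azure OpenAI models page.
    print(response.choices[0].message.content)
    ```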

    I asked the model a question about something that happened recently before the knowledge cutoff and it got the answer wrong. Why does this happen?

    This is expected behavior. First, there's no guarantee that every recent event was part of the model's training data. And even when information was part of the training data, without using additional techniques like Retrieval Augmented Generation (RAG) to help ground the model's responses, there's always a chance of ungrounded responses occurring. Both Azure OpenAI's use your data feature and Bing Chat use Azure OpenAI models combined with Retrieval Augmented Generation to help further ground model responses.
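    To make the RAG idea concrete, here is a rough sketch of the general pattern (this is not the built-in use your data feature; search_my_index is a hypothetical retrieval function standing in for Azure AI Search or any other document store):

    ```python
    # Generic RAG sketch: ground the model on retrieved text instead of relying
    # on whatever happened to be in its training data.
    def search_my_index(question: str) -> list[str]:
        # Hypothetical retrieval step - in practice this would query Azure AI Search,
        # a vector store, or another index of your documents.
        return ["<relevant passage 1>", "<relevant passage 2>"]

    def grounded_answer(client, deployment: str, question: str) -> str:
        context = "\n\n".join(search_my_index(question))
        response = client.chat.completions.create(
            model=deployment,
            messages=[
                {
                    "role": "system",
                    "content": "Answer only from the provided context. If the context "
                               "does not contain the answer, say you don't know.\n\n"
                               "Context:\n" + context,
                },
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content
    ```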

    The frequency that a given piece of information appeared in the training data can also impact the likelihood that the model will respond in a certain way.

    Asking the latest GPT-4 Turbo Preview model about something that changed more recently, like "Who is the prime minister of New Zealand?", is likely to result in the fabricated response "Jacinda Ardern". However, asking the model "When did Jacinda Ardern step down as prime minister?" tends to yield an accurate response, which demonstrates training data knowledge going up to at least January 2023.

    So while it is possible to probe the model with questions to guess its training data knowledge cutoff, the model's page is the best place to check a model's knowledge cutoff.
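    If you do want to probe a deployment's effective knowledge this way, a small sketch along these lines (reusing the client and deployment placeholder from the first snippet; the probe questions are just examples) can make the comparison concrete:

    ```python
    # Probe the deployment with date-anchored questions, then compare the answers
    # with the documented cutoff on the models page rather than trusting either alone.
    probes = [
        "Who is the prime minister of New Zealand?",            # answer changed in January 2023
        "When did Jacinda Ardern step down as prime minister?", # tests knowledge into 2023
    ]

    for question in probes:
        response = client.chat.completions.create(
            model="<your-gpt-4-turbo-deployment>",
            messages=[{"role": "user", "content": question}],
        )
        print(question, "->", response.choices[0].message.content)
    ```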

    I hope this helps. Do let me know if you have any further queries.

    1 person found this answer helpful.
