QueueTriggered FunctionApp failing with : Timeout exceeded

KS, Sharath 21 Reputation points
2020-08-28T12:13:05.49+00:00

I am running a queue-triggered function app developed in Python under an App Service plan: myFunction (EP3: 1).
Earlier I had set functionTimeout in host.json to 02:00:00, but it failed with this error:

Timeout value of 02:00:00 was exceeded by function: Functions.myFunction
EXCEPTION: Microsoft.Azure.WebJobs.Host.FunctionTimeoutException
Failed method: Microsoft.Azure.WebJobs.Host.Executors.FunctionExecutor+<TryHandleTimeoutAsync>d__31.MoveNext

But my function app does nothing except connect to Snowflake whenever a new queue message is added and execute a procedure on Snowflake, which doesn't take more than 5 minutes to complete.

After getting this error I increased the time limit to 4 hours, then 6, then 8 hours, but the issue continued. Without finding any solution, I changed it to 23:59:59. To my surprise this also failed, with the error:

Timeout value of 23:59:59 was exceeded by function: Functions.myFunction

I couldn't find a proper solution for this issue anywhere, so I am posting my question here. Please give me a way to resolve this issue permanently.

host.json

{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": false,
        "excludedTypes": "Request"
      }
    }
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[1.*, 2.0.0)"
  },
  "functionTimeout":"23:59:59"
}

Accepted answer
  1. JayaC-MSFT 5,526 Reputation points
    2020-09-02T12:16:52.977+00:00

    @KSSharath-7336 Thank you for sharing the details. After investigation we figured out that your application is using the Snowflake Python connector, which expects any opened connection to be closed explicitly.
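    On the application side, this means wrapping the connection so it is closed even when the procedure call raises. A minimal stdlib-only sketch of the pattern (the `_Connection` class below is a hypothetical stand-in for the object returned by `snowflake.connector.connect`, which is not available here; `my_procedure` is likewise a placeholder):

    ```python
    from contextlib import closing

    class _Connection:
        """Hypothetical stand-in for a Snowflake connection object."""
        def __init__(self):
            self.closed = False

        def execute(self, sql: str) -> None:
            pass  # the real connector would run the statement here

        def close(self) -> None:
            self.closed = True

    def handle_message(conn: _Connection, body: str) -> None:
        # closing() guarantees conn.close() runs even if execute raises,
        # so the worker is never left holding a dangling Snowflake session.
        with closing(conn):
            conn.execute(f"CALL my_procedure('{body}')")  # placeholder procedure

    conn = _Connection()
    handle_message(conn, "queue-item-1")
    print(conn.closed)  # True
    ```

    With the real connector, the same `with closing(...)` shape (or an explicit `try`/`finally` calling `conn.close()`) applies unchanged.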

    However, we sometimes see performance issues for Python functions (including in the case of simultaneous calls), so we suggest increasing FUNCTIONS_WORKER_PROCESS_COUNT.

    This behavior is expected due to the single-threaded architecture of the Python worker: blocking synchronous HTTP calls or I/O-bound calls block the entire event loop.

    How to handle such scenarios is documented in our Python Functions developer reference: https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python#scaling-and-concurrency . See especially the async part.

    Here are the two methods to handle this:

    1. Use async calls.
    2. Add more language worker processes per host via the application setting FUNCTIONS_WORKER_PROCESS_COUNT, up to a maximum value of 10. For a CPU-bound workload, we recommend setting FUNCTIONS_WORKER_PROCESS_COUNT to a higher number to parallelize the work given to a single instance (docs here).
      [Please note that each new language worker is spawned every 10 seconds until they are warmed up.]
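    The first option can be sketched with the standard library alone; `run_procedure` below is a hypothetical stand-in for the blocking Snowflake call (here it just sleeps briefly to simulate synchronous I/O):

    ```python
    import asyncio
    import time

    def run_procedure(message: str) -> str:
        # Stand-in for the blocking Snowflake call; a real function app
        # would open a connection and CALL the stored procedure here.
        time.sleep(0.1)
        return f"processed {message}"

    async def main(message: str) -> str:
        # Offload the blocking call to a thread pool so the worker's
        # event loop stays free to handle other invocations.
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(None, run_procedure, message)

    result = asyncio.run(main("queue-item-1"))
    print(result)  # processed queue-item-1
    ```

    In an actual queue trigger, `main` would be declared `async def main(msg: func.QueueMessage)` and the runtime awaits it; the key point is that the blocking work runs in an executor rather than directly on the event loop.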

    Here is a GitHub issue which talks about this issue in detail : https://github.com/Azure/azure-functions-python-worker/issues/236

    Please let me know if this helps. If it does, please 'Accept as answer' and ‘Up-vote’ so that it can help others in the community looking for help on similar topics.

