Question

Asked by DavidBeavon-2754

Instance Pools in Azure Databricks (WHY?)

Instance pools in Azure Databricks allow you to define a custom pool of VMs that are warmed up and ready for use. You can specify a minimum idle quantity, which ensures that some instances are always on hand when you need them. The end result is that your clusters can start several minutes faster than they otherwise would.
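
For context, here is roughly what defining such a pool looks like through the Instance Pools REST API (a sketch only; the field names follow the API as I understand it, and the workspace URL, token, and values below are placeholders rather than our real configuration):

```python
import requests

WORKSPACE_URL = "https://<your-workspace>.azuredatabricks.net"  # placeholder
TOKEN = "<personal-access-token>"                               # placeholder

# Pool definition: a warm set of Standard_DS3_v2 nodes with an LTS runtime preloaded.
pool_definition = {
    "instance_pool_name": "warm-ds3v2-pool",
    "node_type_id": "Standard_DS3_v2",
    "min_idle_instances": 2,                            # VMs kept warm (and billed) even when idle
    "max_capacity": 10,
    "idle_instance_autotermination_minutes": 60,
    "preloaded_spark_versions": ["11.3.x-scala2.12"],   # pin an LTS runtime so nodes start faster
}

resp = requests.post(
    f"{WORKSPACE_URL}/api/2.0/instance-pools/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=pool_definition,
)
resp.raise_for_status()
print(resp.json())  # returns an instance_pool_id to reference from clusters
```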

Instance pools are nice. But shouldn't this be the way things behave in the first place? I always thought the whole point of computing in the cloud was that we share a pool of resources with lots of other customers, which introduces efficiencies. One of those efficiencies should be, for example, that a cluster's VMs are already prepared for use and don't need to be booted from scratch.

There is little that can actually be customized in a custom instance pool. We use a standard VM size ("Standard_DS3_v2", which fixes the CPU, RAM, and core count) and we specify an LTS version of the Databricks runtime. These selections should be common to a ton of other customers. Given these common factors, it seems Databricks should be able to manage a pool on my behalf. If anything, they should just display a checkbox that says "Your cluster looks pretty typical, do you want to use our standard instance pool to improve startup times?": YES/NO.
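
And this is roughly all that changes on the cluster side once a pool exists: the cluster definition swaps its node type for an instance_pool_id (again a sketch; the id, workspace URL, and token below are made up):

```python
import requests

WORKSPACE_URL = "https://<your-workspace>.azuredatabricks.net"  # placeholder
TOKEN = "<personal-access-token>"                               # placeholder

# Cluster definition: everything here is "typical"; the only pool-specific knob is the id.
cluster_spec = {
    "cluster_name": "nightly-etl",
    "spark_version": "11.3.x-scala2.12",              # LTS runtime, matching the pool
    "instance_pool_id": "0101-120000-pool-abcdefgh",  # hypothetical pool id
    "num_workers": 4,
    "autotermination_minutes": 30,
}

resp = requests.post(
    f"{WORKSPACE_URL}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=cluster_spec,
)
resp.raise_for_status()
print(resp.json())  # returns the new cluster_id
```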

The main things that bother me about these custom instance pools:

  • Seems unnecessary. We are already drawing from a pool of Azure cloud resources. Why am I defining a pool within a pool?

  • Cost burden. I'm having to pay for a pool of idle VMs (see the rough estimate sketched after this list). Every customer should not have to bear the cost of their own distinct set of idle VMs, especially when lots of other customers are keeping the very same VM images idle as well. It is likely that everyone is paying many times over for the same underlying set of idle VMs.

  • Configuration management. What a lot of hoopla! In the right scenarios, none of this configuration cruft should be needed (i.e. when using a common Databricks runtime, CPU, RAM, and core selection). There should be a simple checkbox where you can choose to use a standard instance pool.
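
Here is the rough idle-cost estimate referred to in the second bullet above. It is a back-of-the-envelope sketch: the hourly rate is an assumed pay-as-you-go figure for Standard_DS3_v2 that varies by region and agreement, and my understanding is that idle pool instances accrue the Azure VM charge but not DBU charges:

```python
# Back-of-the-envelope: what a private warm pool costs just to sit idle.
VM_RATE_PER_HOUR = 0.23     # assumed USD/hour for Standard_DS3_v2 (illustrative only)
MIN_IDLE_INSTANCES = 2      # the pool's minimum idle quantity
HOURS_PER_MONTH = 730

idle_cost_per_month = VM_RATE_PER_HOUR * MIN_IDLE_INSTANCES * HOURS_PER_MONTH
print(f"~${idle_cost_per_month:,.0f}/month just to keep {MIN_IDLE_INSTANCES} VMs warm")
# Roughly $336/month in this example -- and every customer running the same
# "standard" image pays it separately, which is the duplication I'm questioning.
```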


What am I missing? Is this just a new feature request that nobody has ever considered? Or is there some design principle that I'm overlooking?

azure-databricks

Hello @DavidBeavon-2754 and thank you for your question.

I would like to address your assumption that all VMs run on shared resources. This is not necessarily the case.
It is true that there are compute options running on shared infrastructure, but there are also isolated compute options. In addition to these two, there is also spot computing.

DavidBeavon-2754 commented:

Why shouldn't Databricks allow us to use shared infrastructure, pulling our cluster nodes from a common pool?

Azure DevOps build agents are a good basis for comparison. When a DevOps build is kicked off, it runs on an agent that is already warmed up and ready to go. It is just a matter of seconds before useful work is being done.

In contrast to DevOps, it takes minutes before an Azure Databricks cluster is ready for use. Even with the help of "instance pools", it takes about 2 minutes before custom code starts executing. And without instance pools, it can be 4 or 5 minutes, which seems pretty unacceptable.
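
For what it's worth, those timings come from simply starting a cluster and polling until it reports RUNNING, along these lines (a sketch; the endpoints follow the Clusters 2.0 REST API as I understand it, and the cluster id, workspace URL, and token are placeholders):

```python
import time
import requests

WORKSPACE_URL = "https://<your-workspace>.azuredatabricks.net"  # placeholder
TOKEN = "<personal-access-token>"                               # placeholder
CLUSTER_ID = "0101-120000-abcdefgh"                             # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

start = time.monotonic()
requests.post(
    f"{WORKSPACE_URL}/api/2.0/clusters/start",
    headers=HEADERS,
    json={"cluster_id": CLUSTER_ID},
).raise_for_status()

# Poll until the cluster reports RUNNING, then report the elapsed wall-clock time.
while True:
    info = requests.get(
        f"{WORKSPACE_URL}/api/2.0/clusters/get",
        headers=HEADERS,
        params={"cluster_id": CLUSTER_ID},
    ).json()
    if info.get("state") == "RUNNING":
        break
    time.sleep(10)

print(f"Cluster ready after {time.monotonic() - start:.0f} seconds")
```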



0 Answers