MoritzYBecker-6196 asked:

Splitting of physical partitions in Cosmos DB

I have a container that contains less than 5 GB of data. From what I have read, a 5 GB container could reside on a single physical partition. However, this container is currently split across 4 physical partitions (shown as "partition key ranges" in "Metrics"). This has a negative impact on performance as throughput is split evenly between the physical partitions.
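To make the performance impact concrete, here is a back-of-the-envelope calculation (an illustration of the even-split behaviour described above, not an official formula):

```python
def per_partition_throughput(total_rus, physical_partitions):
    # Cosmos DB divides provisioned throughput evenly across physical
    # partitions; a request targeting one partition can only consume
    # that partition's share.
    return total_rus / physical_partitions

# 400 RU/s spread over 4 physical partitions leaves only 100 RU/s
# per partition before requests start getting rate-limited (429s).
print(per_partition_throughput(400, 4))  # 100.0
```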

The container evolved as follows:

  1. The initially empty container was created with 20,000 RU/s.

  2. Approx. 5 GB of data was created in the container.

  3. The throughput was decreased to 400 RU/s.

  4. The throughput was manually changed multiple times between 10,000 RU/s and the current final value of 400 RU/s.


  1. Which of the steps above caused the container to be split across 4 physical partitions?

  2. If I had created the container with 10,000 RU/s initially, and made sure that the manually provisioned throughput subsequently only fluctuated between 400 and 10,000 RU/s, would the container have stayed on a single physical partition?

  3. Is there a way to "defragment" the container (apart from recreating it) so it gets stored on the minimum number of physical partitions actually required?


1 Answer

AnuragSharma-MSFT answered:

Hi @MoritzYBecker-6196, welcome to the Microsoft Q&A forum.

Physical partitions in Azure Cosmos DB are managed entirely by the service; as end users we have no direct control over them. However, the number of physical partitions per container depends on:

  1. The provisioned throughput (each physical partition can serve up to 10,000 request units per second).

  2. The total data storage (each physical partition can store up to 50 GB of data).

In your case, the container was first created with 20,000 RU/s, which would have created multiple physical partitions. As you mentioned, if you had created the container with fewer RU/s, it might have started on a single physical partition.
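The two limits above give a rough rule of thumb for the expected partition count (a sketch for estimation only; the actual split behaviour is internal to the service and may differ):

```python
import math

def estimate_physical_partitions(provisioned_rus, data_gb,
                                 max_rus_per_partition=10_000,
                                 max_gb_per_partition=50):
    # Whichever limit requires more partitions determines the count.
    by_throughput = math.ceil(provisioned_rus / max_rus_per_partition)
    by_storage = math.ceil(data_gb / max_gb_per_partition)
    return max(by_throughput, by_storage)

# The container in this question: created at 20,000 RU/s, ~5 GB of data.
print(estimate_physical_partitions(20_000, 5))  # 2
```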

From a performance perspective, there are two options:
1. Increase the RU/s allocated to the container, which adds cost.
2. Recreate the container with fewer RU/s (as you mentioned) and migrate the data from the old container to the new one.

A feature request for this has already been logged on UserVoice; you can upvote it here:
Improve scaling down experience: remove redundant physical partitions


Thanks for your answer. I know that each physical partition can only provide up to 10,000 RU/s. But my container never had more than 20,000 RU/s and was never larger than 5 GB. How is it then possible that it got split into 4 physical partitions and not just 2?


Hi @MoritzYBecker-6196, thanks for following up.

Ideally, Azure Cosmos DB should have created just 2 physical partitions, since throughput was limited to 20,000 RU/s. Do you happen to know whether these 4 physical partitions were created at the time the container was created, or later, as documents were added? As the number of logical partitions grew, Cosmos DB might have created new physical partitions, although that should not normally happen.
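The "splits happen but never merge" behaviour can be sketched as a toy model (an illustrative assumption, not documented service behaviour):

```python
import math

def partitions_after_scaling(ru_history, max_ru_per_partition=10_000):
    # Toy model: scaling up beyond current capacity splits partitions
    # until the throughput fits; scaling back down never merges them.
    partitions = 1
    for rus in ru_history:
        needed = math.ceil(rus / max_ru_per_partition)
        partitions = max(partitions, needed)
    return partitions

# The history from the question: 20,000 RU/s at creation, then swings
# between 400 and 10,000 RU/s. The model predicts 2 partitions, which
# is why the observed count of 4 is surprising.
print(partitions_after_scaling([20_000, 400, 10_000, 400]))  # 2
```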


I don't know when the 4 physical partitions were created, and AFAIK there is no way to find out. All I can see in Metrics is that the throughput never exceeded 20,000 RU/s. But it's important for us to understand how this happened and how to predict/prevent the creation of physical partitions, so we can advise our clients accordingly on how to deploy our (Cosmos DB-based) product. The number of physical partitions has a huge impact on performance and cost. Could someone from MSFT look into this more deeply, as it may be a bug?
