Dear all,
Twice this week I've had issues with Data Deduplication.
I've configured the following:
Server 1, Volume D:\ > General purpose file server, files older than 30 days, disk size 750 GB
Server 1, Volume E:\ > General purpose file server, files older than 3 days, disk size 200 GB
Server 2, Volume D:\ > General purpose file server, files older than 30 days, disk size 750 GB
Server 2, Volume E:\ > General purpose file server, files older than 3 days, disk size 200 GB
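For reference, I enabled and tuned this with the built-in Deduplication PowerShell cmdlets, roughly as follows (a sketch from memory; the exact parameters may have differed slightly):

    # Enable deduplication for a general purpose file server workload
    Enable-DedupVolume -Volume D: -UsageType Default
    Enable-DedupVolume -Volume E: -UsageType Default

    # Only optimize files older than the configured age
    Set-DedupVolume -Volume D: -MinimumFileAgeDays 30
    Set-DedupVolume -Volume E: -MinimumFileAgeDays 3

    # Verify the settings and the current savings
    Get-DedupVolume | Format-List Volume, MinimumFileAgeDays, SavedSpace, SavingsRate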
Server 1 is the primary server among the DFS targets, and a full mesh topology has been configured in DFSR.
Last Monday the D:\ volume on Server 1 was completely full. I noticed that the chunk store had grown in size, and the data on the disk was also much larger than expected. On Server 2 it was clear that there was a large difference in size. A Garbage Collection job didn't produce any result, and an Optimization job could not be run because there was no disk space available. Eventually I extended the disk by 100 GB so that I could run an Optimization job, which shrank the used space.
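For what it's worth, this is roughly what I used to check the volume and try to reclaim space (a sketch; property names as I recall them):

    # Compare reported savings against actual free space on the volume
    Get-DedupStatus -Volume D: | Format-List Volume, FreeSpace, SavedSpace, OptimizedFilesSavingsRate

    # Reclaim unreferenced chunks from the chunk store; -Full is slower but frees more
    Start-DedupJob -Volume D: -Type GarbageCollection -Full

    # Re-run optimization once enough free space is available again
    Start-DedupJob -Volume D: -Type Optimization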
Today I'm facing the same issue on the E:\ volume on Server 1, but I don't want to extend the disk there, as it shouldn't actually be required: the data on this disk is around 130 GB without deduplication, and around 99 GB with deduplication working. Because Optimization currently cannot run, the full 200 GB is being consumed.
Do you have any tips for solving this issue without extending the disk capacity, and can you explain why it occurs? I cannot find any events in the Deduplication logs in Event Viewer that clarify this behavior.
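This is how I searched the logs (assuming the standard operational channel; I found nothing relevant):

    # List recent critical/error/warning events from the Data Deduplication log
    Get-WinEvent -LogName Microsoft-Windows-Deduplication/Operational |
        Where-Object { $_.Level -le 3 } |   # 1 = Critical, 2 = Error, 3 = Warning
        Select-Object -First 20 TimeCreated, Id, LevelDisplayName, Message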
Kind regards,
minQkel
Update: Garbage Collection did clear 40 GB of this space, so I've been able to start the Optimization job properly. The question remains, though: I still have no clue why this issue occurs.
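For anyone who runs into the same problem, the sequence that recovered the space for me was roughly this (run from an elevated PowerShell prompt):

    # Full garbage collection first, to free unreferenced chunks in the chunk store
    Start-DedupJob -Volume E: -Type GarbageCollection -Full -Wait

    # Optimization can then run again now that space has been reclaimed
    Start-DedupJob -Volume E: -Type Optimization

    # Monitor the progress of running deduplication jobs
    Get-DedupJob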