To respond to LOOKUP messages when a neighbor is searching for a particular key, a DRT node MUST maintain a Route Entry Cache. A DRT cloud has no scale limitation and could consist of millions of registrations. Because it would be prohibitively expensive, in both bandwidth and memory, for every node to cache every registration in a cloud of this size, the selection of which neighbors to cache is critical to achieving a reasonable trade-off among search time, bandwidth, and memory consumption.
The specific cache organization is an implementation detail,<4> but the following are requirements that MUST be met by a cache implementation:
The cache MUST be organized so that a search for a single registration in the cloud can be completed on the order of Log10(n) LOOKUP message operations, where n is the total number of registrations in the cloud. (For example, the cache structure described in [PAST] has this property.)
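The Log10(n) bound can be made concrete with a small sketch. In a [PAST]-style prefix-routing cache, each LOOKUP hop reaches a node whose key matches at least one more base-10 digit of the target key, so the hop count is bounded by the number of digits needed to distinguish n registrations. The function below is illustrative only; its name and the base-10 assumption are not taken from the specification.

```python
import math

def expected_lookup_hops(n: int, base: int = 10) -> int:
    """Upper bound on LOOKUP operations for a prefix-routing cache.

    Each hop resolves at least one more base-`base` digit of the
    target key, so a cloud of n registrations needs about
    log_base(n) hops (and always at least one).
    """
    if n <= 1:
        return 1
    return max(1, math.ceil(math.log10(n) / math.log10(base)))

# A cloud of one million registrations resolves in about 6 hops:
print(expected_lookup_hops(1_000_000))  # -> 6
```

This is why the cache can stay small even in very large clouds: the number of entries a node must hold grows logarithmically, not linearly, with the number of registrations.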
The cache MUST logically include all entries in each of the node's leaf sets.
This constraint on the cache ensures that there is always a discoverable path to a registered key. DRT nodes that are also Publishers use this constraint to detect and repair partitions in the cloud.
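A leaf set can be sketched as the k numerically closest route entries on either side of the local key in the circular number space; caching the full leaf set is what guarantees the final LOOKUP hop can always reach a key's owner. The function below is a minimal illustration, assuming integer keys in a modular number space; the names and the per-side size parameter are not from the specification.

```python
def leaf_set(local_key: int, known_keys: list[int],
             modulus: int, k: int = 2) -> list[int]:
    """Return the k circularly closest keys on each side of local_key.

    `modulus` is the size of the circular number space; `k` is the
    assumed number of leaf-set entries kept per side.
    """
    def clockwise(a: int, b: int) -> int:
        # Distance traveling clockwise from a to b around the ring.
        return (b - a) % modulus

    others = [key for key in known_keys if key != local_key]
    successors = sorted(others, key=lambda key: clockwise(local_key, key))[:k]
    predecessors = sorted(others, key=lambda key: clockwise(key, local_key))[:k]
    return sorted(set(successors + predecessors))

# Node with key 50 in a 0..99 space, keeping 2 entries per side:
print(leaf_set(50, [10, 20, 40, 60, 70, 90], 100))  # -> [20, 40, 60, 70]
```

Because the leaf sets of adjacent nodes overlap, a lookup that reaches any node numerically near the target key can always take one final hop to the key's owner, which is the discoverable-path property the constraint provides.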
A DRT node MUST maintain a cache of at least 10 route entries (or all route entries in the cloud, if there are fewer than 10), the keys of which are evenly distributed around the number space.
This constraint ensures that when a neighbor performing a bootstrap operation sends SOLICIT messages requesting entries from this node's cache, the node can ADVERTISE an even distribution of candidates.
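One way to satisfy the even-distribution requirement is to pick, for each of 10 evenly spaced target points around the number space, the cached key circularly closest to that point. This is a hypothetical selection sketch, not the normative algorithm; the function name and parameters are assumptions.

```python
def evenly_spread(cached_keys: list[int], modulus: int,
                  count: int = 10) -> list[int]:
    """Select up to `count` cached keys spread evenly around the
    circular number space of size `modulus` (ADVERTISE candidates)."""
    if len(cached_keys) <= count:
        # Fewer entries than required: advertise everything we have.
        return sorted(cached_keys)
    chosen: list[int] = []
    step = modulus // count
    for i in range(count):
        target = i * step
        # Circular distance (either direction) from the target point.
        best = min(cached_keys,
                   key=lambda k: min((k - target) % modulus,
                                     (target - k) % modulus))
        if best not in chosen:
            chosen.append(best)
    return sorted(chosen)

# Four targets (0, 25, 50, 75) in a 0..99 space pick one key near each:
print(evenly_spread([5, 6, 7, 50, 51, 52, 90], 100, count=4))
# -> [5, 7, 50, 90]
```

Selecting by proximity to fixed targets keeps the advertised set spread out even when the cache itself is clustered around a few regions of the number space, which is exactly what a bootstrapping neighbor needs.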