Caching

Caching is a common technique that aims to improve the performance and scalability of a system. It does this by temporarily copying frequently accessed data to fast storage that's located close to the application. If this fast data storage is located closer to the application than the original source, caching can significantly improve response times for client applications by serving data more quickly.

Caching is most effective when a client instance repeatedly reads the same data, especially if all of the following conditions apply to the original data store:

  • It remains relatively static.
  • It's slow compared to the speed of the cache.
  • It's subject to a high level of contention.
  • It's far away, and network latency can cause access to be slow.

Caching in distributed applications

Distributed applications typically implement either or both of the following strategies when caching data:

  • Using a private cache, where data is held locally on the computer that's running an instance of an application or service.
  • Using a shared cache, serving as a common source that can be accessed by multiple processes and machines.

In both cases, caching can be performed client-side and server-side. Client-side caching is done by the process that provides the user interface for a system, such as a web browser or desktop application. Server-side caching is done by the process that provides the business services that run remotely.

Private caching

The most basic type of cache is an in-memory store. It's held in the address space of a single process and accessed directly by the code that runs in that process. This type of cache is quick to access. It can also provide an effective means for storing modest amounts of static data, because the size of a cache is typically constrained by the amount of memory that's available on the machine that hosts the process.
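
As a simple illustration, the following sketch shows a private in-memory cache owned by a single application instance. It assumes the Microsoft.Extensions.Caching.Memory NuGet package (the article doesn't prescribe a particular library), and LoadProductNameFromDatabase is a hypothetical stand-in for the real data access code.

using System;
using Microsoft.Extensions.Caching.Memory;

public class ProductCatalog
{
    // Each application instance owns its own in-memory cache.
    private readonly MemoryCache cache = new MemoryCache(new MemoryCacheOptions());

    public string GetProductName(int productId)
    {
        // Return the cached value if present; otherwise load it and cache it.
        return cache.GetOrCreate(productId, entry =>
        {
            entry.SlidingExpiration = TimeSpan.FromMinutes(5); // drop entries that go unused
            return LoadProductNameFromDatabase(productId);     // hypothetical data access
        });
    }

    private string LoadProductNameFromDatabase(int productId) =>
        $"Product {productId}"; // placeholder for a real database call
}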

If you need to cache more information than is physically possible in memory, you can write cached data to the local file system. This will be slower to access than data held in memory, but it should still be faster and more reliable than retrieving data across a network.

If you have multiple instances of an application that uses this model running concurrently, each application instance has its own independent cache holding its own copy of the data.

Think of a cache as a snapshot of the original data at some point in the past. If this data is not static, it is likely that different application instances will hold different versions of the data in their caches. Therefore, the same query performed by these instances can return different results, as shown in Figure 1.

Figure 1: Using an in-memory cache in different instances of an application.

Shared caching

Using a shared cache can help alleviate concerns that data might differ in each cache, which can occur with in-memory caching. Shared caching ensures that different application instances see the same view of cached data. It does this by locating the cache in a separate location, typically hosted as part of a separate service, as shown in Figure 2.

Figure 2: Using a shared cache.

An important benefit of the shared caching approach is the scalability it provides. Many shared cache services are implemented by using a cluster of servers and use software to distribute the data across the cluster transparently. An application instance simply sends a request to the cache service. The underlying infrastructure determines the location of the cached data in the cluster. You can easily scale the cache by adding more servers.

There are two main disadvantages of the shared caching approach:

  • The cache is slower to access because it's no longer held locally to each application instance.
  • The requirement to implement a separate cache service might add complexity to the solution.

Considerations for using caching

The following sections describe in more detail the considerations for designing and using a cache.

Decide when to cache data

Caching can dramatically improve performance, scalability, and availability. The more data that you have and the larger the number of users that need to access this data, the greater the benefits of caching become. That's because caching reduces the latency and contention that's associated with handling large volumes of concurrent requests in the original data store.

For example, a database might support a limited number of concurrent connections. Retrieving data from a shared cache, rather than from the underlying database, makes it possible for a client application to access this data even if the number of available connections is currently exhausted. Additionally, if the database becomes unavailable, client applications might be able to continue by using the data that's held in the cache.

Consider caching data that is read frequently but modified infrequently (for example, data that has a higher proportion of read operations than write operations). However, we don't recommend that you use the cache as the authoritative store of critical information. Instead, ensure that all changes that your application cannot afford to lose are always saved to a persistent data store. This means that if the cache is unavailable, your application can still continue to operate by using the data store, and you won't lose important information.

Determine how to cache data effectively

The key to using a cache effectively lies in determining the most appropriate data to cache, and caching it at the appropriate time. The data can be added to the cache on demand the first time it is retrieved by an application. This means that the application needs to fetch the data only once from the data store, and that subsequent access can be satisfied by using the cache.

Alternatively, a cache can be partially or fully populated with data in advance, typically when the application starts (an approach known as seeding). However, it might not be advisable to implement seeding for a large cache, because this approach can impose a sudden, high load on the original data store when the application starts running.

Often an analysis of usage patterns can help you decide whether to fully or partially prepopulate a cache, and to choose the data to cache. For example, it can be useful to seed the cache with the static user profile data for customers who use the application regularly (perhaps every day), but not for customers who use the application only once a week.

Caching typically works well with data that is immutable or that changes infrequently. Examples include reference information such as product and pricing information in an e-commerce application, or shared static resources that are costly to construct. Some or all of this data can be loaded into the cache at application startup to minimize demand on resources and to improve performance. It might also be appropriate to have a background process that periodically updates reference data in the cache to ensure it is up to date, or that refreshes the cache when reference data changes.
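
The sketch below illustrates seeding reference data at startup and refreshing it periodically in the background. It assumes the same in-memory caching package as the earlier sketch; the ReferenceDataSeeder class, the Product type, and LoadAllProductsAsync are illustrative names rather than part of any library.

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public class ReferenceDataSeeder : IDisposable
{
    private readonly IMemoryCache cache;
    private readonly Timer refreshTimer;

    public ReferenceDataSeeder(IMemoryCache cache)
    {
        this.cache = cache;

        // Periodically refresh the cached reference data so that it doesn't become stale.
        // (Fire-and-forget; production code should also handle errors.)
        this.refreshTimer = new Timer(state => SeedAsync(), null,
            TimeSpan.FromMinutes(30), TimeSpan.FromMinutes(30));
    }

    // Call once at application startup to prepopulate (seed) the cache with reference data.
    public async Task SeedAsync()
    {
        IEnumerable<Product> products = await LoadAllProductsAsync(); // hypothetical data access
        foreach (Product product in products)
        {
            cache.Set($"product:{product.Id}", product, TimeSpan.FromHours(1));
        }
    }

    // Placeholder for a query against the original data store.
    private Task<IEnumerable<Product>> LoadAllProductsAsync() =>
        Task.FromResult<IEnumerable<Product>>(new[] { new Product { Id = 1, Name = "Widget" } });

    public void Dispose() => refreshTimer.Dispose();
}

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}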

Caching is less useful for dynamic data, although there are some exceptions to this consideration (see the section Cache highly dynamic data later in this article for more information). When the original data changes regularly, either the cached information becomes stale very quickly, or the overhead of synchronizing the cache with the original data store reduces the effectiveness of caching.

Note that a cache does not have to include the complete data for an entity. For example, if a data item represents a multivalued object such as a bank customer with a name, address, and account balance, some of these elements might remain static (such as the name and address), while others (such as the account balance) might be more dynamic. In these situations, it can be useful to cache the static portions of the data and retrieve (or calculate) the remaining information only when it is required.

We recommend that you carry out performance testing and usage analysis to determine whether prepopulating or on-demand loading of the cache, or a combination of both, is appropriate. The decision should be based on the volatility and usage pattern of the data. Cache utilization and performance analysis are particularly important in applications that encounter heavy loads and must be highly scalable. For example, in highly scalable scenarios it might make sense to seed the cache to reduce the load on the data store at peak times.

Caching can also be used to avoid repeating computations while the application is running. If an operation transforms data or performs a complicated calculation, it can save the results of the operation in the cache. If the same calculation is required afterward, the application can simply retrieve the results from the cache.

An application can modify data that's held in a cache. However, we recommend thinking of the cache as a transient data store that could disappear at any time. Do not store valuable data only in the cache; make sure that you maintain the information in the original data store as well. This means that if the cache becomes unavailable, you minimize the chance of losing data.

Cache highly dynamic data

When you store rapidly changing information in a persistent data store, it can impose an overhead on the system. For example, consider a device that continually reports status or some other measurement. If an application chooses not to cache this data on the basis that the cached information will nearly always be outdated, then the same consideration could be true when storing and retrieving this information from the data store. In the time it takes to save and fetch this data, it might have changed.

In a situation such as this, consider the benefits of storing the dynamic information directly in the cache instead of in the persistent data store. If the data is noncritical and does not require auditing, then it doesn't matter if the occasional change is lost.

Manage data expiration in a cache

In most cases, data that's held in a cache is a copy of data that's held in the original data store. The data in the original data store might change after it was cached, causing the cached data to become stale. Many caching systems enable you to configure the cache to expire data and reduce the period for which data might be out of date.

When cached data expires, it's removed from the cache, and the application must retrieve the data from the original data store (it can put the newly fetched information back into the cache). You can set a default expiration policy when you configure the cache. In many cache services, you can also stipulate the expiration period for individual objects when you store them programmatically in the cache. Some caches enable you to specify the expiration period as an absolute value, or as a sliding value that causes the item to be removed from the cache if it is not accessed within the specified time. This setting overrides any cache-wide expiration policy, but only for the specified objects.
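
For example, with the StackExchange.Redis library that's discussed later in this article, an expiration period can be supplied for an individual item when it's written, or a time-to-live can be applied to an existing key. The key names and time spans below are illustrative only.

using System;
using System.Threading.Tasks;
using StackExchange.Redis;

public static class ExpirationExample
{
    public static async Task CacheWithTtlAsync(IDatabase cache)
    {
        // Store an item that expires five minutes after it's written.
        await cache.StringSetAsync("product:42", "Widget", expiry: TimeSpan.FromMinutes(5));

        // Apply (or change) a time-to-live on an existing key.
        await cache.KeyExpireAsync("product:42", TimeSpan.FromMinutes(10));
    }
}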

Note

Consider the expiration period for the cache and the objects that it contains carefully. If you make it too short, objects will expire too quickly and you will reduce the benefits of using the cache. If you make the period too long, you risk the data becoming stale.

It's also possible that the cache might fill up if data is allowed to remain resident for a long time. In this case, any requests to add new items to the cache might cause some items to be forcibly removed, in a process known as eviction. Cache services typically evict data on a least-recently-used (LRU) basis, but you can usually override this policy and prevent items from being evicted. However, if you adopt this approach, you risk exceeding the memory that's available in the cache. An application that attempts to add an item to the cache will then fail with an exception.

Some caching implementations might provide additional eviction policies. These include:

  • A most-recently-used policy (in the expectation that the data will not be required again).
  • A first-in-first-out policy (the oldest data is evicted first).
  • An explicit removal policy based on a triggered event (such as the data being modified).

使用戶端快取中的資料失效Invalidate data in a client-side cache

保留在用戶端快取中的資料,通常會被視為不屬於提供資料給用戶端的服務支援。Data that's held in a client-side cache is generally considered to be outside the auspices of the service that provides the data to the client. 服務不能直接強制用戶端新增或移除來自用戶端快取的資訊。A service cannot directly force a client to add or remove information from a client-side cache.

這表示使用設定不良之快取的用戶端,可能會繼續使用過時的資訊。This means that it's possible for a client that uses a poorly configured cache to continue using outdated information. 例如,如果未正確實作快取的到期原則,當原始資料來源中的資訊已變更時,用戶端可能會使用本機快取的過時資訊。For example, if the expiration policies of the cache aren't properly implemented, a client might use outdated information that's cached locally when the information in the original data source has changed.

如果您正在建置透過 HTTP 連接提供資料的 Web 應用程式,您可以隱含地強制 Web 用戶端 (例如瀏覽器或 Web Proxy) 來擷取最新資訊。If you are building a web application that serves data over an HTTP connection, you can implicitly force a web client (such as a browser or web proxy) to fetch the most recent information. 如果資源是透過變更該資源的 URI 來更新,您就可以執行此動作。You can do this if a resource is updated by a change in the URI of that resource. Web 用戶端通常會使用資源的 URI 做為用戶端快取中的索引鍵,因此,如果 URI 變更,就會導致 Web 用戶端忽略任何先前快取的資源版本,並改為擷取新的版本。Web clients typically use the URI of a resource as the key in the client-side cache, so if the URI changes, the web client ignores any previously cached versions of a resource and fetches the new version instead.

Managing concurrency in a cache

Caches are often designed to be shared by multiple instances of an application. Each application instance can read and modify data in the cache. Consequently, the same concurrency issues that arise with any shared data store also apply to a cache. In a situation where an application needs to update data that's held in the cache, you might need to ensure that updates made by one instance of the application do not overwrite the changes made by another instance.

Depending on the nature of the data and the likelihood of collisions, you can adopt one of two approaches to concurrency:

  • Optimistic. Immediately prior to updating the data, the application checks to see whether the data in the cache has changed since it was retrieved. If the data is still the same, the change can be made. Otherwise, the application has to decide whether to update it. (The business logic that drives this decision will be application-specific.) This approach is suitable for situations where updates are infrequent, or where collisions are unlikely to occur. A sketch of an optimistic check appears after this list.
  • Pessimistic. When it retrieves the data, the application locks it in the cache to prevent another instance from changing it. This process ensures that collisions cannot occur, but it can also block other instances that need to process the same data. Pessimistic concurrency can affect the scalability of a solution and is recommended only for short-lived operations. This approach might be appropriate for situations where collisions are more likely, especially if an application updates multiple items in the cache and must ensure that these changes are applied consistently.
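
As an illustration of the optimistic approach, the following sketch uses the conditional transaction support in the StackExchange.Redis library (covered later in this article) to update a cached value only if it still holds the value that was originally read. The key and values are placeholders.

using System.Threading.Tasks;
using StackExchange.Redis;

public static class OptimisticUpdateExample
{
    // Update a cached value only if it hasn't changed since it was read.
    public static async Task<bool> TryUpdateAsync(
        IDatabase cache, string key, string expectedValue, string newValue)
    {
        ITransaction transaction = cache.CreateTransaction();

        // The transaction only executes if the key still holds the value that was originally read.
        transaction.AddCondition(Condition.StringEqual(key, expectedValue));
        _ = transaction.StringSetAsync(key, newValue);

        // Returns false if the condition failed, that is, another instance changed the value first.
        return await transaction.ExecuteAsync();
    }
}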

Implement high availability and scalability, and improve performance

Avoid using a cache as the primary repository of data; this is the role of the original data store from which the cache is populated. The original data store is responsible for ensuring the persistence of the data.

Be careful not to introduce critical dependencies on the availability of a shared cache service into your solutions. An application should be able to continue functioning if the service that provides the shared cache is unavailable. The application should not become unresponsive or fail while waiting for the cache service to resume.

Therefore, the application must be prepared to detect the availability of the cache service and fall back to the original data store if the cache is inaccessible. The Circuit Breaker pattern is useful for handling this scenario. The service that provides the cache can be recovered, and once it becomes available, the cache can be repopulated as data is read from the original data store, following a strategy such as the Cache-aside pattern.

However, system scalability might be affected if the application falls back to the original data store when the cache is temporarily unavailable. While the data store is being recovered, the original data store could be swamped with requests for data, resulting in timeouts and failed connections.

Consider implementing a local, private cache in each instance of an application, together with the shared cache that all application instances access. When the application retrieves an item, it can check first in its local cache, then in the shared cache, and finally in the original data store. The local cache can be populated by using the data in either the shared cache, or in the database if the shared cache is unavailable.
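
A hedged sketch of this two-level lookup is shown below. It assumes the Microsoft.Extensions.Caching.Memory package for the local cache and the StackExchange.Redis library (discussed later in this article) for the shared cache; GetItemFromDataSourceAsync is a hypothetical data access method.

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;
using StackExchange.Redis;

public class TieredCache
{
    private readonly IMemoryCache localCache;   // private, per-instance cache
    private readonly IDatabase sharedCache;     // shared Redis cache

    public TieredCache(IMemoryCache localCache, IDatabase sharedCache)
    {
        this.localCache = localCache;
        this.sharedCache = sharedCache;
    }

    public async Task<string> GetAsync(string key)
    {
        // 1. Check the local, private cache first.
        if (localCache.TryGetValue(key, out string value))
        {
            return value;
        }

        // 2. Fall back to the shared cache, tolerating the case where it is unreachable.
        try
        {
            RedisValue shared = await sharedCache.StringGetAsync(key);
            if (shared.HasValue)
            {
                value = shared;
            }
        }
        catch (RedisConnectionException)
        {
            // The shared cache is unavailable; continue to the original data store.
        }

        // 3. Finally, read from the original data store.
        value ??= await GetItemFromDataSourceAsync(key); // hypothetical data access

        // Keep a short-lived local copy so that it doesn't become too stale.
        localCache.Set(key, value, TimeSpan.FromSeconds(30));
        return value;
    }

    private Task<string> GetItemFromDataSourceAsync(string key) =>
        Task.FromResult($"value-for-{key}"); // placeholder for a real database query
}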

This approach requires careful configuration to prevent the local cache from becoming too stale with respect to the shared cache. However, the local cache acts as a buffer if the shared cache is unreachable. Figure 3 shows this structure.

Figure 3: Using a local private cache with a shared cache.

To support large caches that hold relatively long-lived data, some cache services provide a high-availability option that implements automatic failover if the cache becomes unavailable. This approach typically involves replicating the cached data that's stored on a primary cache server to a secondary cache server, and switching to the secondary server if the primary server fails or connectivity is lost.

To reduce the latency that's associated with writing to multiple destinations, the replication to the secondary server might occur asynchronously when data is written to the cache on the primary server. This approach leads to the possibility that some cached information might be lost in the event of a failure, but the proportion of this data should be small compared to the overall size of the cache.

If a shared cache is large, it might be beneficial to partition the cached data across nodes to reduce the chances of contention and improve scalability. Many shared caches support the ability to dynamically add (and remove) nodes and rebalance the data across partitions. This approach might involve clustering, in which the collection of nodes is presented to client applications as a seamless, single cache. Internally, however, the data is dispersed among the nodes following a predefined distribution strategy that balances the load evenly. For more information about possible partitioning strategies, see Data partitioning guidance.

Clustering can also increase the availability of the cache. If a node fails, the remainder of the cache is still accessible. Clustering is frequently used in conjunction with replication and failover. Each node can be replicated, and the replica can be quickly brought online if the node fails.

Many read and write operations are likely to involve single data values or objects. However, at times it might be necessary to store or retrieve large volumes of data quickly. For example, seeding a cache could involve writing hundreds or thousands of items to the cache. An application might also need to retrieve a large number of related items from the cache as part of the same request.

Many large-scale caches provide batch operations for these purposes. These operations enable a client application to package up a large volume of items into a single request, and they reduce the overhead that's associated with performing a large number of small requests.
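
For example, with the StackExchange.Redis library that's used later in this article, a single call can retrieve many keys in one round trip. The helper below is an illustrative sketch rather than part of the library.

using System.Linq;
using System.Threading.Tasks;
using StackExchange.Redis;

public static class BatchGetExample
{
    // Retrieve many related items from the cache in a single round trip.
    public static async Task<string[]> GetManyAsync(IDatabase cache, string[] keys)
    {
        RedisKey[] redisKeys = keys.Select(k => (RedisKey)k).ToArray();

        // One multi-key request instead of one request per key.
        RedisValue[] values = await cache.StringGetAsync(redisKeys);

        return values.Select(v => (string)v).ToArray();
    }
}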

Caching and eventual consistency

For the cache-aside pattern to work, the instance of the application that populates the cache must have access to the most recent and consistent version of the data. In a system that implements eventual consistency (such as a replicated data store), this might not be the case.

One instance of an application could modify a data item and invalidate the cached version of that item. Another instance of the application might attempt to read this item from the cache, which causes a cache miss, so it reads the data from the data store and adds it to the cache. However, if the data store has not been fully synchronized with the other replicas, the application instance could read and populate the cache with the old value.

For more information about handling data consistency, see the Data consistency primer.

Protect cached data

Irrespective of the cache service you use, consider how to protect the data that's held in the cache from unauthorized access. There are two main concerns:

  • The privacy of the data in the cache.
  • The privacy of data as it flows between the cache and the application that's using the cache.

To protect data in the cache, the cache service might implement an authentication mechanism that requires that applications specify the following:

  • Which identities can access data in the cache.
  • Which operations (read and write) these identities are allowed to perform.

To reduce the overhead that's associated with reading and writing data, after an identity has been granted write and/or read access to the cache, that identity can use any data in the cache.

If you need to restrict access to subsets of the cached data, you can do one of the following:

  • Split the cache into partitions (by using different cache servers) and grant access to identities only for the partitions that they should be allowed to use.
  • Encrypt the data in each subset by using different keys, and provide the encryption keys only to identities that should have access to each subset. A client application might still be able to retrieve all of the data in the cache, but it will be able to decrypt only the data for which it has the keys.

You must also protect the data as it flows in and out of the cache. To do this, you depend on the security features that are provided by the network infrastructure that client applications use to connect to the cache. If the cache is implemented by using an on-site server within the same organization that hosts the client applications, then the isolation of the network itself might not require you to take additional steps. If the cache is located remotely and requires a TCP or HTTP connection over a public network (such as the Internet), consider implementing SSL.

Considerations for implementing caching in Azure

Azure Cache for Redis is an implementation of the open-source Redis cache that runs as a service in an Azure datacenter. It provides a caching service that can be accessed from any Azure application, whether the application is implemented as a cloud service, a website, or inside an Azure virtual machine. Caches can be shared by client applications that have the appropriate access key.

Azure Cache for Redis is a high-performance caching solution that provides availability, scalability, and security. It typically runs as a service spread across one or more dedicated machines. It attempts to store as much information as it can in memory to ensure fast access. This architecture is intended to provide low latency and high throughput by reducing the need to perform slow I/O operations.

Azure Cache for Redis is compatible with many of the various APIs that are used by client applications. If you have existing applications that already use Redis running on-premises, Azure Cache for Redis provides a quick migration path to caching in the cloud.

Features of Redis

Redis is more than a simple cache server. It provides a distributed in-memory database with an extensive command set that supports many common scenarios. These are described later in this document, in the section Using Redis caching. This section summarizes some of the key features that Redis provides.

Redis as an in-memory database

Redis supports both read and write operations. In Redis, writes can be protected from system failure either by being stored periodically in a local snapshot file or in an append-only log file. This is not the case in many caches, which should be considered transitory data stores.

All writes are asynchronous and do not block clients from reading and writing data. When Redis starts running, it reads the data from the snapshot or log file and uses it to construct the in-memory cache. For more information, see Redis persistence on the Redis website.

Note

Redis does not guarantee that all writes will be saved in the event of a catastrophic failure, but at worst you might lose only a few seconds' worth of data. Remember that a cache is not intended to act as an authoritative data source; it is the responsibility of the applications using the cache to ensure that critical data is saved successfully to an appropriate data store. For more information, see the Cache-aside pattern.

Redis data types

Redis is a key-value store, where values can contain simple types or complex data structures such as hashes, lists, and sets. It supports a set of atomic operations on these data types. Keys can be permanent or tagged with a limited time-to-live, at which point the key and its corresponding value are automatically removed from the cache. For more information about Redis keys and values, see the page An introduction to Redis data types and abstractions on the Redis website.
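
The following sketch, which assumes the StackExchange.Redis library introduced later in this article, shows a hash, a set, and a time-to-live being used together; the key names and values are illustrative.

using System;
using System.Threading.Tasks;
using StackExchange.Redis;

public static class DataTypesExample
{
    public static async Task StoreStructuresAsync(IDatabase cache)
    {
        // A hash stores the fields of an object under a single key.
        await cache.HashSetAsync("customer:42", new[]
        {
            new HashEntry("Name", "Alice"),
            new HashEntry("City", "Seattle")
        });

        // A set holds an unordered collection of unique members.
        await cache.SetAddAsync("customer:42:tags", new RedisValue[] { "premium", "newsletter" });

        // Give the hash a time-to-live so that it's removed automatically.
        await cache.KeyExpireAsync("customer:42", TimeSpan.FromHours(1));
    }
}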

Redis replication and clustering

Redis supports primary/subordinate replication to help ensure availability and maintain throughput. Write operations to a Redis primary node are replicated to one or more subordinate nodes. Read operations can be served by the primary or any of the subordinates.

In the event of a network partition, subordinates can continue to serve data and then transparently resynchronize with the primary when the connection is reestablished. For further details, see the Replication page on the Redis website.

Redis also provides clustering, which enables you to transparently partition data into shards across servers and spread the load. This feature improves scalability, because new Redis servers can be added and the data repartitioned as the size of the cache increases.

Furthermore, each server in the cluster can be replicated by using primary/subordinate replication. This ensures availability across each node in the cluster. For more information about clustering and sharding, see the Redis cluster tutorial page on the Redis website.

Redis memory use

A Redis cache has a finite size that depends on the resources available on the host computer. When you configure a Redis server, you can specify the maximum amount of memory it can use. You can also configure a key in a Redis cache to have an expiration time, after which it is automatically removed from the cache. This feature can help prevent the in-memory cache from filling up with old or stale data.

As memory fills up, Redis can automatically evict keys and their values by following a number of policies. The default is LRU (least recently used), but you can also select other policies, such as evicting keys at random or turning off eviction altogether (in which case, attempts to add items to the cache fail if it is full). The page Using Redis as an LRU cache provides more information.
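
For a self-hosted Redis server, the memory limit and eviction policy are typically set in redis.conf, as in the illustrative snippet below. (Azure Cache for Redis exposes comparable settings through its configuration options rather than a configuration file that you edit directly.)

# Cap the memory that Redis may use for data.
maxmemory 2gb

# Evict the least recently used keys when the limit is reached.
# Alternatives include allkeys-random and noeviction.
maxmemory-policy allkeys-lru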

Redis transactions and batches

Redis enables a client application to submit a series of operations that read and write data in the cache as an atomic transaction. All the commands in the transaction are guaranteed to run sequentially, and no commands issued by other concurrent clients will be interleaved between them.

However, these are not true transactions as a relational database would perform them. Transaction processing consists of two stages: the first is when the commands are queued, and the second is when the commands are run. During the command queuing stage, the commands that comprise the transaction are submitted by the client. If some sort of error occurs at this point (such as a syntax error or the wrong number of parameters), then Redis refuses to process the entire transaction and discards it.

During the run phase, Redis performs each queued command in sequence. If a command fails during this phase, Redis continues with the next queued command and does not roll back the effects of any commands that have already been run. This simplified form of transaction helps to maintain performance and avoid performance problems that are caused by contention.

Redis does implement a form of optimistic locking to assist in maintaining consistency. For detailed information about transactions and locking with Redis, see the Transactions page on the Redis website.

Redis also supports nontransactional batching of requests. The Redis protocol that clients use to send commands to a Redis server enables a client to send a series of operations as part of the same request. This can help to reduce packet fragmentation on the network. When the batch is processed, each command is performed. If any of these commands are malformed, they will be rejected (which doesn't happen with a transaction), but the remaining commands will be performed. There is also no guarantee about the order in which the commands in the batch will be processed.
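
With the StackExchange.Redis library used later in this article, nontransactional batching can be expressed through an IBatch object, as in the sketch below; the key names are illustrative.

using System.Threading.Tasks;
using StackExchange.Redis;

public static class BatchingExample
{
    public static async Task WriteBatchAsync(IDatabase cache)
    {
        // Commands added to a batch are buffered and sent together when Execute is called.
        IBatch batch = cache.CreateBatch();

        Task set1 = batch.StringSetAsync("counter:page1", 0);
        Task set2 = batch.StringSetAsync("counter:page2", 0);
        Task set3 = batch.StringSetAsync("counter:page3", 0);

        // Send the buffered commands to the server as a single unit of work.
        batch.Execute();

        // The individual tasks complete as the server processes each command.
        await Task.WhenAll(set1, set2, set3);
    }
}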

Redis security

Redis is focused purely on providing fast access to data, and it is designed to run inside a trusted environment that can be accessed only by trusted clients. Redis supports a limited security model based on password authentication. (It is possible to remove authentication completely, although we don't recommend this.)

All authenticated clients share the same global password and have access to the same resources. If you need more comprehensive sign-in security, you must implement your own security layer in front of the Redis server, and all client requests should pass through this additional layer. Redis should not be directly exposed to untrusted or unauthenticated clients.

You can restrict access to commands by disabling them or renaming them (and by providing only privileged clients with the new names).

Redis does not directly support any form of data encryption, so all encoding must be performed by client applications. Additionally, Redis does not provide any form of transport security. If you need to protect data as it flows across the network, we recommend implementing an SSL proxy.

For more information, see the Redis security page on the Redis website.

Note

Azure Cache for Redis provides its own security layer through which clients connect. The underlying Redis servers are not exposed to the public network.

Azure Redis cache

Azure Cache for Redis provides access to Redis servers that are hosted at an Azure datacenter. It acts as a façade that provides access control and security. You can provision a cache by using the Azure portal.

The portal provides a number of predefined configurations. These range from a 53 GB cache running as a dedicated service that supports SSL communications (for privacy) and master/subordinate replication with an SLA of 99.9% availability, down to a 250 MB cache without replication (no availability guarantees) running on shared hardware.

Using the Azure portal, you can also configure the eviction policy of the cache, and control access to the cache by adding users to the roles provided. These roles, which define the operations that members can perform, include Owner, Contributor, and Reader. For example, members of the Owner role have complete control over the cache (including security) and its contents, members of the Contributor role can read and write information in the cache, and members of the Reader role can only retrieve data from the cache.

Most administrative tasks are performed through the Azure portal. For this reason, many of the administrative commands that are available in the standard version of Redis are not available, including the ability to modify the configuration programmatically, shut down the Redis server, configure additional subordinates, or forcibly save data to disk.

The Azure portal includes a convenient graphical display that enables you to monitor the performance of the cache. For example, you can view the number of connections being made, the number of requests being performed, the volume of reads and writes, and the number of cache hits versus cache misses. Using this information, you can determine the effectiveness of the cache and, if necessary, switch to a different configuration or change the eviction policy.

Additionally, you can create alerts that send email messages to an administrator if one or more critical metrics fall outside of an expected range. For example, you might want to alert an administrator if the number of cache misses exceeds a specified value in the last hour, because it means the cache might be too small or data might be being evicted too quickly.

You can also monitor the CPU, memory, and network usage for the cache.

For further information and examples showing how to create and configure an Azure Cache for Redis, see the page Lap around Azure Cache for Redis on the Azure blog.

Caching session state and HTML output

If you're building ASP.NET web applications that run by using Azure web roles, you can save session state information and HTML output in an Azure Cache for Redis. The session state provider for Azure Cache for Redis enables you to share session information between different instances of an ASP.NET web application, and it is very useful in web farm situations where client-server affinity is not available and caching session data in memory would not be appropriate.

Using the session state provider with Azure Cache for Redis delivers several benefits, including:

  • Sharing session state with a large number of instances of ASP.NET web applications.
  • Providing improved scalability.
  • Supporting controlled, concurrent access to the same session state data for multiple readers and a single writer.
  • Using compression to save memory and improve network performance.

For more information, see ASP.NET session state provider for Azure Cache for Redis.

Note

Do not use the session state provider for Azure Cache for Redis with ASP.NET applications that run outside of the Azure environment. The latency of accessing the cache from outside of Azure can eliminate the performance benefits of caching data.

Similarly, the output cache provider for Azure Cache for Redis enables you to save the HTTP responses that are generated by an ASP.NET web application. Using the output cache provider with Azure Cache for Redis can improve the response times of applications that render complex HTML output. Application instances that generate similar responses can use the shared output fragments in the cache rather than generating this HTML output afresh. For more information, see ASP.NET output cache provider for Azure Cache for Redis.

Building a custom Redis cache

Azure Cache for Redis acts as a façade to the underlying Redis servers. If you require an advanced configuration that is not covered by the Azure Redis cache (such as a cache bigger than 53 GB), you can build and host your own Redis servers by using Azure virtual machines.

This is a potentially complex process, because you might need to create several VMs to act as primary and subordinate nodes if you want to implement replication. Furthermore, if you want to create a cluster, you need multiple primary and subordinate servers. A minimal clustered replication topology that provides a high degree of availability and scalability comprises at least six VMs organized as three pairs of primary/subordinate servers (a cluster must contain at least three primary nodes).

Each primary/subordinate pair should be located close together to minimize latency. However, each set of pairs can run in different Azure datacenters located in different regions, if you want to locate cached data close to the applications that are most likely to use it. For an example of building and configuring a Redis node running as an Azure VM, see Running Redis on a CentOS Linux VM in Azure.

Note

If you implement your own Redis cache in this way, you are responsible for monitoring, managing, and securing the service.

Partitioning a Redis cache

Partitioning the cache involves splitting the cache across multiple computers. This structure gives you several advantages over using a single cache server, including:

  • Creating a cache that is much bigger than can be stored on a single server.
  • Distributing data across servers, improving availability. If one server fails or becomes inaccessible, the data that it holds is unavailable, but the data on the remaining servers can still be accessed. For a cache, this is not crucial because the cached data is only a transient copy of the data that's held in a database. Cached data on a server that becomes inaccessible can be cached on a different server instead.
  • Spreading the load across servers, thereby improving performance and scalability.
  • Geolocating data close to the users that access it, thus reducing latency.

For a cache, the most common form of partitioning is sharding. In this strategy, each partition (or shard) is a Redis cache in its own right. Data is directed to a specific partition by using sharding logic, which can use a variety of approaches to distribute the data. The Sharding pattern provides more information about implementing sharding.

To implement partitioning in a Redis cache, you can take one of the following approaches:

  • Server-side query routing. In this technique, a client application sends a request to any of the Redis servers that comprise the cache (probably the closest server). Each Redis server stores metadata that describes the partition that it holds, and also contains information about which partitions are located on other servers. The Redis server examines the client request. If it can be resolved locally, it performs the requested operation. Otherwise, it forwards the request on to the appropriate server. This model is implemented by Redis clustering, and is described in more detail on the Redis cluster tutorial page on the Redis website. Redis clustering is transparent to client applications, and additional Redis servers can be added to the cluster (and the data repartitioned) without requiring that you reconfigure the clients.
  • Client-side partitioning. In this model, the client application contains logic (possibly in the form of a library) that routes requests to the appropriate Redis server. This approach can be used with Azure Cache for Redis. Create multiple Azure Cache for Redis instances (one for each data partition) and implement the client-side logic that routes the requests to the correct cache, as sketched below. If the partitioning scheme changes (if additional Azure Cache for Redis instances are created, for example), client applications might need to be reconfigured.
  • Proxy-assisted partitioning. In this scheme, client applications send requests to an intermediary proxy service that understands how the data is partitioned, and the proxy then routes the request to the appropriate Redis server. This approach can also be used with Azure Cache for Redis; the proxy service can be implemented as an Azure cloud service. This approach requires an additional level of complexity to implement the service, and requests might take longer to perform than when using client-side partitioning.

The page Partitioning: how to split data among multiple Redis instances on the Redis website provides further information about implementing partitioning with Redis.
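
The following sketch illustrates client-side partitioning across two or more caches. The PartitionedCacheClient class, the connection strings, and the simple key-hashing scheme are assumptions made for the example; a production system would typically follow the Sharding pattern mentioned earlier.

using System;
using System.Text;
using System.Threading.Tasks;
using StackExchange.Redis;

public class PartitionedCacheClient
{
    private readonly IDatabase[] shards;

    public PartitionedCacheClient(params string[] connectionStrings)
    {
        // One connection per partition (for example, one per Azure Cache for Redis instance).
        shards = Array.ConvertAll(connectionStrings,
            cs => ConnectionMultiplexer.Connect(cs).GetDatabase());
    }

    public Task<RedisValue> GetAsync(string key) => SelectShard(key).StringGetAsync(key);

    public Task SetAsync(string key, string value) => SelectShard(key).StringSetAsync(key, value);

    // Route each key to a shard by using a simple, stable hash of the key.
    private IDatabase SelectShard(string key)
    {
        uint hash = 2166136261;
        foreach (byte b in Encoding.UTF8.GetBytes(key)) // FNV-1a style hash
        {
            hash = (hash ^ b) * 16777619;
        }

        return shards[(int)(hash % (uint)shards.Length)];
    }
}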

Implement Redis cache client applications

Redis supports client applications written in numerous programming languages. If you are building new applications by using the .NET Framework, the recommended approach is to use the StackExchange.Redis client library. This library provides a .NET Framework object model that abstracts the details of connecting to a Redis server, sending commands, and receiving responses. It is available in Visual Studio as a NuGet package. You can use this same library to connect to an Azure Cache for Redis, or to a custom Redis cache hosted on a VM.

To connect to a Redis server, you use the static Connect method of the ConnectionMultiplexer class. The connection that this method creates is designed to be used throughout the lifetime of the client application, and the same connection can be used by multiple concurrent threads. Do not reconnect and disconnect each time you perform a Redis operation, because this can degrade performance.

You can specify the connection parameters, such as the address of the Redis host and the password. If you are using Azure Cache for Redis, the password is either the primary or secondary key that is generated for Azure Cache for Redis by using the Azure portal.

After you have connected to the Redis server, you can obtain a handle on the Redis database that acts as the cache. The Redis connection provides the GetDatabase method to do this. You can then retrieve items from the cache and store data in the cache by using the StringGet and StringSet methods. These methods expect a key as a parameter, and they either return the item in the cache that has a matching key (StringGet) or add the item to the cache under this key (StringSet).

Depending on the location of the Redis server, many operations might incur some latency while a request is transmitted to the server and a response is returned to the client. The StackExchange library provides asynchronous versions of many of the methods that it exposes to help client applications remain responsive. These methods support the Task-based Asynchronous Pattern in the .NET Framework.

The following code snippet shows a method named RetrieveItem. It illustrates an implementation of the cache-aside pattern based on Redis and the StackExchange library. The method takes a string key value and attempts to retrieve the corresponding item from the Redis cache by calling the StringGetAsync method (the asynchronous version of StringGet).

If the item is not found, it is fetched from the underlying data source by using the GetItemFromDataSourceAsync method (which is a local method and not part of the StackExchange library). It's then added to the cache by using the StringSetAsync method so that it can be retrieved more quickly next time.

// Connect to the Azure Redis cache
ConfigurationOptions config = new ConfigurationOptions();
config.EndPoints.Add("<your DNS name>.redis.cache.windows.net");
config.Password = "<Redis cache key from management portal>";
ConnectionMultiplexer redisHostConnection = ConnectionMultiplexer.Connect(config);
IDatabase cache = redisHostConnection.GetDatabase();
...
private async Task<string> RetrieveItem(string itemKey)
{
    // Attempt to retrieve the item from the Redis cache
    string itemValue = await cache.StringGetAsync(itemKey);

    // If the value returned is null, the item was not found in the cache
    // So retrieve the item from the data source and add it to the cache
    if (itemValue == null)
    {
        itemValue = await GetItemFromDataSourceAsync(itemKey);
        await cache.StringSetAsync(itemKey, itemValue);
    }

    // Return the item
    return itemValue;
}

StringGetStringSet 方法不會限制要擷取或儲存字串值。The StringGet and StringSet methods are not restricted to retrieving or storing string values. 它們可以接受任何序列化為位元組陣列的項目。They can take any item that is serialized as an array of bytes. 如果您需要儲存 .NET 物件,可將其序列化為位元組資料流,並使用 StringSet 方法,將它寫入快取。If you need to save a .NET object, you can serialize it as a byte stream and use the StringSet method to write it to the cache.

同樣地,您可以使用 StringGet 方法,從快取中讀取物件,並將其還原序列化為 .NET 物件。Similarly, you can read an object from the cache by using the StringGet method and deserializing it as a .NET object. 下列程式碼示範一組 IDatabase 介面的擴充方法 (Redis 連接的 GetDatabase 方法會傳回 IDatabase 物件),以及一些使用這些方法來讀取和寫入 BlogPost 物件至快取的範例程式碼:The following code shows a set of extension methods for the IDatabase interface (the GetDatabase method of a Redis connection returns an IDatabase object), and some sample code that uses these methods to read and write a BlogPost object to the cache:

public static class RedisCacheExtensions
{
    public static async Task<T> GetAsync<T>(this IDatabase cache, string key)
    {
        return Deserialize<T>(await cache.StringGetAsync(key));
    }

    public static async Task<object> GetAsync(this IDatabase cache, string key)
    {
        return Deserialize<object>(await cache.StringGetAsync(key));
    }

    public static async Task SetAsync(this IDatabase cache, string key, object value)
    {
        await cache.StringSetAsync(key, Serialize(value));
    }

    static byte[] Serialize(object o)
    {
        byte[] objectDataAsStream = null;

        if (o != null)
        {
            BinaryFormatter binaryFormatter = new BinaryFormatter();
            using (MemoryStream memoryStream = new MemoryStream())
            {
                binaryFormatter.Serialize(memoryStream, o);
                objectDataAsStream = memoryStream.ToArray();
            }
        }

        return objectDataAsStream;
    }

    static T Deserialize<T>(byte[] stream)
    {
        T result = default(T);

        if (stream != null)
        {
            BinaryFormatter binaryFormatter = new BinaryFormatter();
            using (MemoryStream memoryStream = new MemoryStream(stream))
            {
                result = (T)binaryFormatter.Deserialize(memoryStream);
            }
        }

        return result;
    }
}

The following code illustrates a method named RetrieveBlogPost that uses these extension methods to read and write a serializable BlogPost object to the cache, following the cache-aside pattern:

// The BlogPost type
[Serializable]
public class BlogPost
{
    private HashSet<string> tags;

    public BlogPost(int id, string title, int score, IEnumerable<string> tags)
    {
        this.Id = id;
        this.Title = title;
        this.Score = score;
        this.tags = new HashSet<string>(tags);
    }

    public int Id { get; set; }
    public string Title { get; set; }
    public int Score { get; set; }
    public ICollection<string> Tags => this.tags;
}
...
private async Task<BlogPost> RetrieveBlogPost(string blogPostKey)
{
    BlogPost blogPost = await cache.GetAsync<BlogPost>(blogPostKey);
    if (blogPost == null)
    {
        blogPost = await GetBlogPostFromDataSourceAsync(blogPostKey);
        await cache.SetAsync(blogPostKey, blogPost);
    }

    return blogPost;
}

Redis supports command pipelining if a client application sends multiple asynchronous requests. Redis can multiplex the requests over the same connection rather than receiving and responding to commands in strict sequence.

This approach helps to reduce latency by making more efficient use of the network. The following code snippet shows an example that retrieves the details of two customers concurrently. The code submits two requests and then performs some other processing (not shown) before waiting to receive the results. The Wait method of the cache object is similar to the .NET Framework Task.Wait method:

ConnectionMultiplexer redisHostConnection = ...;
IDatabase cache = redisHostConnection.GetDatabase();
...
var task1 = cache.StringGetAsync("customer:1");
var task2 = cache.StringGetAsync("customer:2");
...
var customer1 = cache.Wait(task1);
var customer2 = cache.Wait(task2);

For additional information on writing client applications that can use Azure Cache for Redis, see the Azure Cache for Redis documentation. More information is also available at StackExchange.Redis.

The page Pipelines and multiplexers on the same website provides more information about asynchronous operations and pipelining with Redis and the StackExchange library.

Using Redis caching

The simplest way to use Redis as a cache is to store key-value pairs, where the value is an uninterpreted string of arbitrary length that can contain any binary data. (It is essentially an array of bytes that can be treated as a string.) This scenario was illustrated in the section Implement Redis Cache client applications earlier in this article.

Note that keys also contain uninterpreted data, so you can use any binary information as a key. The longer the key, however, the more space it takes to store and the longer lookup operations take. For usability and ease of maintenance, design your keyspace carefully and use meaningful (but not verbose) keys.

For example, use structured keys such as "customer:100" to represent the key for the customer with ID 100, rather than simply "100". This scheme makes it easy to distinguish between values that store different data types. For example, you could also use the key "orders:100" to represent the key for the order with ID 100.
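The following sketch shows one way to keep key construction consistent. It is only an illustration: the MakeKey helper and the customerData and orderData variables are hypothetical and are not part of the StackExchange library.

ConnectionMultiplexer redisHostConnection = ...;
IDatabase cache = redisHostConnection.GetDatabase();
...
// Hypothetical helper that builds structured keys such as "customer:100"
static string MakeKey(string entityType, int id) =>
    string.Format(CultureInfo.InvariantCulture, "{0}:{1}", entityType, id);
...
string customerData = ...;  // Serialized customer details
string orderData = ...;     // Serialized order details

// Store values under structured keys rather than bare IDs
await cache.StringSetAsync(MakeKey("customer", 100), customerData);  // Key "customer:100"
await cache.StringSetAsync(MakeKey("orders", 100), orderData);       // Key "orders:100"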

Apart from one-dimensional binary strings, a value in a Redis key-value pair can also hold more structured information, including lists, sets (sorted and unsorted), and hashes. Redis provides a comprehensive command set for manipulating these types, and many of these commands are available to .NET Framework applications through a client library such as StackExchange. The page An introduction to Redis data types and abstractions on the Redis website provides a more detailed overview of these types and the commands that you can use to manipulate them.

This section summarizes some common use cases for these data types and commands.

Perform atomic and batch operations

Redis supports a series of atomic get-and-set operations on string values. These operations remove the race conditions that can occur when separate GET and SET commands are used. The available operations include:

  • INCR, INCRBY, DECR, and DECRBY, which perform atomic increment and decrement operations on integer numeric data values. The StackExchange library provides overloaded versions of the IDatabase.StringIncrementAsync and IDatabase.StringDecrementAsync methods to perform these operations and return the resulting value that is stored in the cache. The following code snippet illustrates how to use these methods:

    ConnectionMultiplexer redisHostConnection = ...;
    IDatabase cache = redisHostConnection.GetDatabase();
    ...
    await cache.StringSetAsync("data:counter", 99);
    ...
    long incrementedValue = await cache.StringIncrementAsync("data:counter");
    // Increment by 1 (the default)
    // incrementedValue should now be 100

    long decrementedValue = await cache.StringDecrementAsync("data:counter", 50);
    // Decrement by 50
    // decrementedValue should now be 50
    
  • GETSET用來擷取與索引鍵相關聯的值,並將其變更為新值。GETSET, which retrieves the value that's associated with a key and changes it to a new value. StackExchange 程式庫會透過 IDatabase.StringGetSetAsync 方法讓這項作業可供使用。The StackExchange library makes this operation available through the IDatabase.StringGetSetAsync method. 下方的程式碼片段會顯示這個方法的範例。The code snippet below shows an example of this method. 此程式碼會從先前範例中傳回目前與索引鍵 "data:counter" 相關聯的值。This code returns the current value that's associated with the key "data:counter" from the previous example. 然後將此索引鍵的值重設回零,這些全都是相同作業的一部分:Then it resets the value for this key back to zero, all as part of the same operation:

    ConnectionMultiplexer redisHostConnection = ...;
    IDatabase cache = redisHostConnection.GetDatabase();
    ...
    string oldValue = await cache.StringGetSetAsync("data:counter", 0);
    
  • MGETMSET 可以作為單一作業傳回或變更一組字串值。MGET and MSET, which can return or change a set of string values as a single operation. IDatabase.StringGetAsyncIDatabase.StringSetAsync 是多載方法,用來支援這項功能,如下列範例所示:The IDatabase.StringGetAsync and IDatabase.StringSetAsync methods are overloaded to support this functionality, as shown in the following example:

    ConnectionMultiplexer redisHostConnection = ...;
    IDatabase cache = redisHostConnection.GetDatabase();
    ...
    // Create a list of key-value pairs
    var keysAndValues =
        new List<KeyValuePair<RedisKey, RedisValue>>()
        {
            new KeyValuePair<RedisKey, RedisValue>("data:key1", "value1"),
            new KeyValuePair<RedisKey, RedisValue>("data:key99", "value2"),
            new KeyValuePair<RedisKey, RedisValue>("data:key322", "value3")
        };
    
    // Store the list of key-value pairs in the cache
    cache.StringSet(keysAndValues.ToArray());
    ...
    // Find all values that match a list of keys
    RedisKey[] keys = { "data:key1", "data:key99", "data:key322"};
    // values should contain { "value1", "value2", "value3" }
    RedisValue[] values = cache.StringGet(keys);
    
    

You can also combine multiple operations into a single Redis transaction, as described in the Redis transactions and batches section earlier in this article. The StackExchange library provides support for transactions through the ITransaction interface.

You create an ITransaction object by using the IDatabase.CreateTransaction method, and you issue the commands for the transaction by using the methods that the ITransaction object provides.

The ITransaction interface provides access to a set of methods similar to those of the IDatabase interface, except that all the methods are asynchronous. They are only performed when the ITransaction.Execute method is invoked. The value returned by ITransaction.Execute indicates whether the transaction was created successfully (true) or whether it failed (false).

The following code snippet shows an example that increments and decrements two counters as part of the same transaction:

ConnectionMultiplexer redisHostConnection = ...;
IDatabase cache = redisHostConnection.GetDatabase();
...
ITransaction transaction = cache.CreateTransaction();
var tx1 = transaction.StringIncrementAsync("data:counter1");
var tx2 = transaction.StringDecrementAsync("data:counter2");
bool result = transaction.Execute();
Console.WriteLine("Transaction {0}", result ? "succeeded" : "failed");
Console.WriteLine("Result of increment: {0}", tx1.Result);
Console.WriteLine("Result of decrement: {0}", tx2.Result);

Remember that Redis transactions are unlike transactions in relational databases. The Execute method simply queues all the commands that make up the transaction to be run, and if any of them is malformed, the transaction is stopped. If all the commands have been queued successfully, each command runs asynchronously.

If any command fails, the others still continue processing. If you need to verify that a command completed successfully, you must fetch its result by using the Result property of the corresponding task, as shown in the example above. Reading the Result property blocks the calling thread until the task has completed.

For more information, see Transactions in Redis.

When performing batch operations, you can use the IBatch interface of the StackExchange library. This interface provides access to a set of methods similar to those of the IDatabase interface, except that all the methods are asynchronous.

You create an IBatch object by using the IDatabase.CreateBatch method, and then run the batch by using the IBatch.Execute method, as shown in the following example. This code simply sets a string value, increments and decrements the same counters used in the previous example, and displays the results:

ConnectionMultiplexer redisHostConnection = ...;
IDatabase cache = redisHostConnection.GetDatabase();
...
IBatch batch = cache.CreateBatch();
batch.StringSetAsync("data:key1", 11);
var t1 = batch.StringIncrementAsync("data:counter1");
var t2 = batch.StringDecrementAsync("data:counter2");
batch.Execute();
Console.WriteLine("{0}", t1.Result);
Console.WriteLine("{0}", t2.Result);

It is important to understand that, unlike a transaction, if a command in a batch fails because it is malformed, the other commands might still run. The IBatch.Execute method does not return any indication of success or failure.

Perform fire-and-forget cache operations

Redis supports fire-and-forget operations through command flags. In this situation, the client simply initiates an operation but has no interest in the result and does not wait for the command to complete. The example below shows how to run the INCR command as a fire-and-forget operation:

ConnectionMultiplexer redisHostConnection = ...;
IDatabase cache = redisHostConnection.GetDatabase();
...
await cache.StringSetAsync("data:key1", 99);
...
cache.StringIncrement("data:key1", flags: CommandFlags.FireAndForget);

Specify automatically expiring keys

When you store an item in a Redis cache, you can specify a timeout after which the item is automatically removed from the cache. You can also query how much more time a key has before it expires by using the TTL command. This command is available to StackExchange applications through the IDatabase.KeyTimeToLive method.

The following code snippet shows how to set an expiration time of 20 seconds on a key and query the remaining lifetime of the key:

ConnectionMultiplexer redisHostConnection = ...;
IDatabase cache = redisHostConnection.GetDatabase();
...
// Add a key with an expiration time of 20 seconds
await cache.StringSetAsync("data:key1", 99, TimeSpan.FromSeconds(20));
...
// Query how much time a key has left to live
// If the key has already expired, the KeyTimeToLive function returns a null
TimeSpan? expiry = cache.KeyTimeToLive("data:key1");

You can also set the expiration time to a specific date and time by using the EXPIRE command, which is available in the StackExchange library as the KeyExpireAsync method:

ConnectionMultiplexer redisHostConnection = ...;
IDatabase cache = redisHostConnection.GetDatabase();
...
// Add a key with an expiration date of midnight on 1st January 2015
await cache.StringSetAsync("data:key1", 99);
await cache.KeyExpireAsync("data:key1",
    new DateTime(2015, 1, 1, 0, 0, 0, DateTimeKind.Utc));
...

Tip

You can manually remove an item from the cache by using the DEL command, which is available through the StackExchange library as the IDatabase.KeyDeleteAsync method.
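For example, the following snippet removes the item stored under "data:key1" in the earlier examples. KeyDeleteAsync returns true if the key existed and was removed:

ConnectionMultiplexer redisHostConnection = ...;
IDatabase cache = redisHostConnection.GetDatabase();
...
// Remove the item with the key "data:key1" from the cache
bool removed = await cache.KeyDeleteAsync("data:key1");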

Use tags to cross-correlate cached items

A Redis set is a collection of multiple items that share a single key. You can create a set by using the SADD command and retrieve the items in a set by using the SMEMBERS command. The StackExchange library implements the SADD command with the IDatabase.SetAddAsync method and the SMEMBERS command with the IDatabase.SetMembersAsync method.

You can also combine existing sets to create new sets by using the SDIFF (set difference), SINTER (set intersection), and SUNION (set union) commands. The StackExchange library unifies these operations in the IDatabase.SetCombineAsync method; the first parameter to this method specifies the set operation to perform.

The following code snippets show how sets can be useful for quickly storing and retrieving collections of related items. This code uses the BlogPost type described in the section Implement Redis Cache client applications earlier in this article.

A BlogPost object contains four fields: an ID, a title, a ranking score, and a collection of tags. The first code snippet below shows the sample data used to populate a C# list of BlogPost objects:

List<string[]> tags = new List<string[]>
{
    new[] { "iot","csharp" },
    new[] { "iot","azure","csharp" },
    new[] { "csharp","git","big data" },
    new[] { "iot","git","database" },
    new[] { "database","git" },
    new[] { "csharp","database" },
    new[] { "iot" },
    new[] { "iot","database","git" },
    new[] { "azure","database","big data","git","csharp" },
    new[] { "azure" }
};

List<BlogPost> posts = new List<BlogPost>();
int blogKey = 1;
int numberOfPosts = 20;
Random random = new Random();
for (int i = 0; i < numberOfPosts; i++)
{
    blogKey++;
    posts.Add(new BlogPost(
        blogKey,                  // Blog post ID
        string.Format(CultureInfo.InvariantCulture, "Blog Post #{0}",
            blogKey),             // Blog post title
        random.Next(100, 10000),  // Ranking score
        tags[i % tags.Count]));   // Tags--assigned from a collection
                                  // in the tags list
}

You can store the tags for each BlogPost object as a set in a Redis cache and associate each set with the ID of the BlogPost. This enables an application to quickly find all the tags that belong to a specific blog post. To enable searching in the opposite direction and find all blog posts that share a specific tag, you can create another set that holds the blog posts, referencing the tag ID in the key:

ConnectionMultiplexer redisHostConnection = ...;
IDatabase cache = redisHostConnection.GetDatabase();
...
// Tags are easily represented as Redis Sets
foreach (BlogPost post in posts)
{
    string redisKey = string.Format(CultureInfo.InvariantCulture,
        "blog:posts:{0}:tags", post.Id);
    // Add tags to the blog post in Redis
    await cache.SetAddAsync(
        redisKey, post.Tags.Select(s => (RedisValue)s).ToArray());

    // Now do the inverse so we can figure out which blog posts have a given tag
    foreach (var tag in post.Tags)
    {
        await cache.SetAddAsync(string.Format(CultureInfo.InvariantCulture,
            "tag:{0}:blog:posts", tag), post.Id);
    }
}

These structures enable you to perform many common queries very efficiently. For example, you can find and display all of the tags for blog post 1 like this:

// Show the tags for blog post #1
foreach (var value in await cache.SetMembersAsync("blog:posts:1:tags"))
{
    Console.WriteLine(value);
}

You can find all tags that are common to blog post 1 and blog post 2 by performing a set intersection operation, as follows:

// Show the tags in common for blog posts #1 and #2
foreach (var value in await cache.SetCombineAsync(SetOperation.Intersect, new RedisKey[]
    { "blog:posts:1:tags", "blog:posts:2:tags" }))
{
    Console.WriteLine(value);
}

And you can find all blog posts that contain a specific tag:

// Show the ids of the blog posts that have the tag "iot".
foreach (var value in await cache.SetMembersAsync("tag:iot:blog:posts"))
{
    Console.WriteLine(value);
}
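
A set union works in the same way. The following sketch (which reuses the tag keys created above) lists the IDs of the blog posts that have either the tag "iot" or the tag "azure":

// Show the ids of the blog posts that have the tag "iot" or the tag "azure"
foreach (var value in await cache.SetCombineAsync(SetOperation.Union, new RedisKey[]
    { "tag:iot:blog:posts", "tag:azure:blog:posts" }))
{
    Console.WriteLine(value);
}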

Find recently accessed items

A common task required of many applications is to find the most recently accessed items. For example, a blogging site might want to display information about the most recently read blog posts.

You can implement this functionality by using a Redis list. A Redis list contains multiple items that share the same key, and it acts as a double-ended queue. You can push items to either end of the list by using the LPUSH (left push) and RPUSH (right push) commands, and retrieve items from either end by using the LPOP and RPOP commands. You can also return a range of elements by using the LRANGE command.

The code snippets below show how you can perform these operations by using the StackExchange library. This code uses the BlogPost type from the previous examples. As a blog post is read by a user, the IDatabase.ListLeftPushAsync method pushes the title of the blog post onto a list that is associated with the key "blog:recent_posts" in the Redis cache.

ConnectionMultiplexer redisHostConnection = ...;
IDatabase cache = redisHostConnection.GetDatabase();
...
string redisKey = "blog:recent_posts";
BlogPost blogPost = ...; // Reference to the blog post that has just been read
await cache.ListLeftPushAsync(
    redisKey, blogPost.Title); // Push the blog post onto the list

As more blog posts are read, their titles are pushed onto the same list. The list is ordered by the sequence in which the titles were added, so the most recently read blog posts are toward the left end of the list. (If the same blog post is read more than once, it will have multiple entries in the list.)

You can display the titles of the most recently read posts by using the IDatabase.ListRangeAsync method. This method takes the key that contains the list, a starting point, and an ending point. The following code retrieves the titles of the 10 blog posts (items 0 through 9) at the left-most end of the list:

// Show latest ten posts
foreach (string postTitle in await cache.ListRangeAsync(redisKey, 0, 9))
{
    Console.WriteLine(postTitle);
}

Note that the ListRangeAsync method does not remove items from the list. To do this, you can use the IDatabase.ListLeftPopAsync and IDatabase.ListRightPopAsync methods.
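
For example, the following snippet removes and returns the most recently added title (the left-most item), reusing the redisKey variable from the snippets above:

// Remove and return the most recently added title from the list
string mostRecentTitle = await cache.ListLeftPopAsync(redisKey);
Console.WriteLine(mostRecentTitle);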

To prevent the list from growing indefinitely, you can periodically cull items by trimming the list. The code snippet below shows how to remove all but the five left-most items from the list:

await cache.ListTrimAsync(redisKey, 0, 4);

Implement a leaderboard

By default, the items in a set are not held in any specific order. You can create an ordered set by using the ZADD command (the IDatabase.SortedSetAdd method in the StackExchange library). The items are ordered by using a numeric value called a score, which is provided as a parameter to the command.

The following code snippet adds the title of a blog post to a sorted set. In this example, each blog post also has a score field that contains the ranking of the blog post.

ConnectionMultiplexer redisHostConnection = ...;
IDatabase cache = redisHostConnection.GetDatabase();
...
string redisKey = "blog:post_rankings";
BlogPost blogPost = ...; // Reference to a blog post that has just been rated
await cache.SortedSetAddAsync(redisKey, blogPost.Title, blogPost.Score);

You can retrieve the blog post titles and scores in ascending score order by using the IDatabase.SortedSetRangeByRankWithScoresAsync method:

foreach (var post in await cache.SortedSetRangeByRankWithScoresAsync(redisKey))
{
    Console.WriteLine(post);
}

Note

The StackExchange library also provides the IDatabase.SortedSetRangeByRankAsync method, which returns the data in score order but does not return the scores.
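
For example, the following snippet prints only the titles, in ascending score order, using the same redisKey as in the snippets above:

foreach (var postTitle in await cache.SortedSetRangeByRankAsync(redisKey))
{
    Console.WriteLine(postTitle);
}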

You can also retrieve items in descending order of score, and limit the number of items that are returned, by providing additional parameters to the IDatabase.SortedSetRangeByRankWithScoresAsync method. The next example displays the titles and scores of the top 10 ranked blog posts:

foreach (var post in await cache.SortedSetRangeByRankWithScoresAsync(
                               redisKey, 0, 9, Order.Descending))
{
    Console.WriteLine(post);
}

The next example uses the IDatabase.SortedSetRangeByScoreWithScoresAsync method, which you can use to limit the items returned to those that fall within a given score range:

// Blog posts with scores between 5000 and 100000
foreach (var post in await cache.SortedSetRangeByScoreWithScoresAsync(
                               redisKey, 5000, 100000))
{
    Console.WriteLine(post);
}

Messaging by using channels

Apart from acting as a data cache, a Redis server provides messaging through a high-performance publisher/subscriber mechanism. Client applications can subscribe to a channel, and other applications or services can publish messages to the channel. Subscribing applications then receive these messages and can process them.

Redis provides the SUBSCRIBE command for client applications to use to subscribe to channels. This command expects the name of one or more channels on which the application will accept messages. The StackExchange library includes the ISubscriber interface, which enables a .NET Framework application to subscribe and publish to channels.

You create an ISubscriber object by using the GetSubscriber method of the connection to the Redis server. You then listen for messages on a channel by using the SubscribeAsync method of this object. The following code example shows how to subscribe to a channel named "messages:blogPosts":

ConnectionMultiplexer redisHostConnection = ...;
ISubscriber subscriber = redisHostConnection.GetSubscriber();
...
await subscriber.SubscribeAsync("messages:blogPosts", (channel, message) => Console.WriteLine("Title is: {0}", message));

The first parameter to the SubscribeAsync method is the name of the channel. This name follows the same conventions that are used by keys in the cache. The name can contain any binary data, but it is advisable to use relatively short, meaningful strings to help ensure good performance and maintainability.

Note also that the namespace used by channels is separate from that used by keys. This means you can have channels and keys that have the same name, although this may make your application code more difficult to maintain.

The second parameter is an Action delegate. This delegate runs asynchronously whenever a new message appears on the channel. This example simply displays the message on the console (the message will contain the title of a blog post).

To publish to a channel, an application can use the Redis PUBLISH command. The StackExchange library provides the ISubscriber.PublishAsync method to perform this operation. The next code snippet shows how to publish a message to the "messages:blogPosts" channel:

ConnectionMultiplexer redisHostConnection = ...;
ISubscriber subscriber = redisHostConnection.GetSubscriber();
...
BlogPost blogPost = ...;
await subscriber.PublishAsync("messages:blogPosts", blogPost.Title);

There are several points you should understand about the publish/subscribe mechanism:

  • Multiple subscribers can subscribe to the same channel, and they will all receive the messages that are published to that channel.
  • Subscribers only receive messages that have been published after they subscribed. Channels are not buffered; once a message is published, the Redis infrastructure pushes the message to each subscriber and then removes it.
  • By default, messages are received by subscribers in the order in which they are sent. In a highly active system with a large number of messages and many subscribers and publishers, guaranteed sequential delivery of messages can slow the performance of the system. If each message is independent and the order is unimportant, you can enable concurrent processing by the Redis system, which can help to improve responsiveness. You can achieve this in a StackExchange client by setting the PreserveAsyncOrder property of the connection used by the subscriber to false:
ConnectionMultiplexer redisHostConnection = ...;
redisHostConnection.PreserveAsyncOrder = false;
ISubscriber subscriber = redisHostConnection.GetSubscriber();

Serialization considerations

When you choose a serialization format, consider the tradeoffs between performance, interoperability, versioning, compatibility with existing systems, data compression, and memory overhead. When you evaluate performance, remember that benchmarks are highly dependent on context. They may not reflect your actual workload, and they may not consider newer libraries or versions. There is no single "fastest" serializer for all scenarios.

Some options to consider include:

  • Protocol Buffers (also called protobuf) is a serialization format developed by Google for serializing structured data efficiently. It uses strongly typed definition files to define message structures. These definition files are then compiled into language-specific code for serializing and deserializing messages. Protobuf can be used over existing RPC mechanisms, or it can generate an RPC service.

  • Apache Thrift uses a similar approach, with strongly typed definition files and a compilation step to generate the serialization code and RPC services.

  • Apache Avro provides similar functionality to Protocol Buffers and Thrift, but there is no compilation step. Instead, serialized data always includes a schema that describes the structure.

  • JSON is an open standard that uses human-readable text fields and has broad cross-platform support. JSON does not use message schemas, and because it is a text-based format, it is not very efficient over the wire. In some cases, however, you may be returning cached items directly to a client via HTTP, in which case storing JSON could save the cost of deserializing from another format and then serializing to JSON. (A JSON-based caching sketch appears after this list.)

  • BSON is a binary serialization format that uses a structure similar to JSON. BSON was designed to be lightweight, easy to scan, and fast to serialize and deserialize, relative to JSON. Depending on the data, a BSON payload may be smaller or larger than the equivalent JSON payload. BSON also has some data types that are not available in JSON, notably BinData (for byte arrays) and Date.

  • MessagePack is a binary serialization format that is designed to be compact for transmission over the wire. There are no message schemas or message type checking.

  • Bond is a cross-platform framework for working with schematized data. It supports cross-language serialization and deserialization. Notable differences from the other systems listed here are its support for inheritance, type aliases, and generics.

  • gRPC is an open-source RPC system developed by Google. By default, it uses Protocol Buffers as its definition language and underlying message interchange format.
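
As an illustration of the JSON option above, the following sketch stores and retrieves objects as JSON strings instead of the BinaryFormatter byte arrays shown earlier. It assumes the Json.NET (Newtonsoft.Json) library is available; RedisJsonCacheExtensions, SetJsonAsync, and GetJsonAsync are illustrative names rather than part of the StackExchange library.

public static class RedisJsonCacheExtensions
{
    // Serialize the value to JSON and store it as a string (uses Newtonsoft.Json)
    public static Task SetJsonAsync(this IDatabase cache, string key, object value)
    {
        return cache.StringSetAsync(key, JsonConvert.SerializeObject(value));
    }

    // Read the JSON string back and deserialize it into the requested type
    public static async Task<T> GetJsonAsync<T>(this IDatabase cache, string key)
    {
        string json = await cache.StringGetAsync(key);
        return json == null ? default(T) : JsonConvert.DeserializeObject<T>(json);
    }
}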

The following patterns might also be relevant to your scenario when you implement caching in your applications:

  • Cache-aside pattern: This pattern describes how to load data on demand into a cache from a data store. It also helps to maintain consistency between data that is held in the cache and data in the original data store.

  • Sharding pattern: This pattern provides information about implementing horizontal partitioning to help improve scalability when storing and accessing large volumes of data.

More information