Critical Section vs. Mutex

Posted by Jeremy Cooke


Both critical sections and mutex objects can be used to synchronize code execution in a Windows CE system. Judicious use of these techniques prevents conflicts when multiple threads contend for shared system resources. But why would a programmer choose one method over the other?


A critical section is defined as a segment of code that can be run by at most one thread at a time. All code that accesses a shared resource should be placed within a critical section in order to synchronize access to the resource. In effect, mutually exclusive access to the resource is guaranteed if all accesses are guarded by the same critical section.


A mutex object is "signaled" when it is not owned by any thread. A thread takes ownership of the synchronizing mutex object prior to accessing a shared resource. If the mutex is already owned by another thread, the requesting thread blocks until ownership is released. Exclusive access can be guaranteed provided all accesses to the resource are guarded by the same mutex object.


Now to highlight the differences: in general, mutex objects are more CPU intensive and slower to use than critical sections, due to the extra bookkeeping involved and the deep execution path into the kernel taken to acquire a mutex. In the uncontended case, the equivalent critical-section functions remain in thread context and consist of little more than checking and incrementing or decrementing lock-count data. This makes critical sections lightweight, fast, and easy to use for thread synchronization within a process. The primary benefit of using a mutex object is that a mutex can be named and shared across process boundaries.


Best practices to optimize for performance include:

  • Minimizing the need to synchronize across process boundaries during software design
  • Minimizing the use of mutex objects whenever possible
  • Using critical sections for synchronization whenever possible


More information on the critical section and mutex APIs can be found on MSDN.