Threading

This chapter is excerpted from C# 3.0 in a Nutshell, Third Edition: A Desktop Quick Reference by Joseph Albahari and Ben Albahari, published by O'Reilly Media.


C# allows you to execute code in parallel through multithreading.

A thread is analogous to the operating system process in which your application runs. Just as processes run in parallel on a computer, threads run in parallel within a single process. Processes are fully isolated from each other; threads have just a limited degree of isolation. In particular, threads share (heap) memory with other threads running in the same application domain. This, in part, is why threading is useful: one thread can fetch data in the background while another thread displays the data as it arrives.

This chapter describes the language and Framework features for creating, configuring, and communicating with threads, and how to coordinate their actions through locking and signaling. It also covers the predefined types that assist threading: BackgroundWorker, ReaderWriterLock, and the Timer classes.

Threading's Uses and Misuses

A common use for multithreading is to maintain a responsive user interface while a time-consuming task executes. If the time-consuming task runs on a parallel "worker" thread, the main thread is free to continue processing keyboard and mouse events.

Whether or not a user interface is involved, multithreading can be useful when awaiting a response from another computer or piece of hardware. If a worker thread performs the task, the instigator is immediately free to do other things, taking advantage of the otherwise unburdened computer.

Another use for multithreading is in writing methods that perform intensive calculations. Such methods can execute faster on a multiprocessor or multicore computer if the workload is shared among two or more threads. Asynchronous delegates are particularly well suited to this. (You can test for the number of processors via the Environment.ProcessorCount property.)

Some features of the .NET Framework implicitly create threads. If you use ASP.NET, WCF, Web Services, or Remoting, incoming client requests can arrive concurrently on the server. You may be unaware that multithreading is taking place, unless, perhaps, you use static fields to cache data without appropriate locking, running afoul of thread safety.

Threads also come with strings attached. The biggest is that multithreading can increase complexity. Having lots of threads does not in itself create complexity; it's the interaction between threads (typically via shared data) that does. This applies whether or not the interaction is intentional, and can cause long development cycles and an ongoing susceptibility to intermittent and nonreproducible bugs. For this reason, it pays to keep interaction to a minimum, and to stick to simple and proven designs wherever possible. This chapter is largely about dealing with just these complexities; remove the interaction and there's relatively little to say!

Threading also comes with a resource and CPU cost in allocating and switching threads. Multithreading will not always speed up your application-it can even slow it down if used excessively or inappropriately. For example, when heavy disk I/O is involved, it can be faster to have a couple of worker threads run tasks in sequence than to have 10 threads executing at once. (In the later section "Signaling with Wait and Pulse," we describe how to implement a producer/consumer queue, which provides just this functionality.)

Getting Started

A C# program starts in a single thread that's created automatically by the CLR and operating system (the "main" thread). Here it lives out its life as a single-threaded application, unless you do otherwise by creating more threads (directly or indirectly).

The simplest way to create a thread is to instantiate a Thread object and to call its Start method. The constructor for Thread takes a ThreadStart delegate: a parameterless method indicating where execution should begin. Here's an example:

class ThreadTest
{
  static void Main(  )
  {
    Thread t = new Thread (WriteY);          // Kick off a new thread
    t.Start();                               // running WriteY(  )

    // Simultaneously, do something on the main thread.
    for (int i = 0; i < 1000; i++) Console.Write ("x");
  }

  static void WriteY(  )
  {
    for (int i = 0; i < 1000; i++) Console.Write ("y");
  }
}

// Output:
xxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyy
yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
yyyyyyyyyyyyyxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
...

Tip

All examples assume the following namespaces are imported, unless otherwise specified:

using System; using System.Threading;

The main thread creates a new thread t on which it runs a method that repeatedly prints the character "y". Simultaneously, the main thread repeatedly prints the character "x", as shown in Figure 19-1, "Starting a new thread". On a single-processor computer, the operating system must allocate "slices" of time to each thread (typically 20 ms in Windows) to simulate concurrency, resulting in repeated blocks of "x" and "y". On a multiprocessor or multicore machine, the two threads can genuinely execute in parallel, although you still get repeated blocks of "x" and "y" because of subtleties in the mechanism by which Console handles concurrent requests.

Figure 19-1. Starting a new thread

Tip

A thread is said to be preempted at the points where its execution is interspersed with the execution of code on another thread. The term often crops up in explaining why something has gone wrong!

Once started, a thread's IsAlive property returns true, until the point where the thread ends. A thread ends when the method referenced in the Thread's constructor finishes-in this case, WriteY. Once ended, a thread cannot restart.

You can wait for another thread to end by calling its Join method. Here's an example:

static void Main(  )
{
  Thread t = new Thread (Go);
  t.Start(  );
  t.Join(  );
  Console.WriteLine ("Thread t has ended!");
}

static void Go(  ) { for (int i = 0; i < 1000; i++) Console.Write ("y"); }

This prints "y" 1,000 times, followed by "Thread t has ended!" straight afterward. You can include a timeout when calling Join, either in milliseconds or as a TimeSpan. It then returns true if the thread ended or false if it timed out.

Thread.Sleep pauses the current thread for a specified period:

Thread.Sleep (TimeSpan.FromHours (1));  // sleep for 1 hour
Thread.Sleep (500);                     // sleep for 500 milliseconds
Thread.Sleep (0);                       // relinquish CPU time-slice

Thread.Sleep(0) relinquishes the processor just long enough to allow any other active threads present in a time-slicing queue (should there be one) to be executed.

Tip

Thread.Sleep(0) is occasionally useful in production code for advanced performance tweaks. It's also an excellent diagnostic tool for helping to uncover thread safety issues: if inserting Thread.Sleep(0) anywhere in your code makes or breaks the program, you almost certainly have a bug.

Each thread has a Name property that you can set for the benefit of debugging. This is particularly useful in Microsoft Visual Studio, since the thread's name is displayed in the Debug Location toolbar. You can set a thread's name just once; attempts to change it later will throw an exception.
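
For instance, here's one way you might name a worker thread (the name "worker" is arbitrary):

Thread worker = new Thread (Go);
worker.Name = "worker";        // Shows up in the Debug Location toolbar
worker.Start(  );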

The static Thread.CurrentThread property allows you to refer to the currently executing thread:

Console.WriteLine (Thread.CurrentThread.Name);

Passing Data to a Thread

Let's say we want to pass an argument to the method on which a thread starts. Here's how it's done:

static void Main(  )
{
  Thread t = new Thread (Print);
  t.Start ("Hello from t!");
  Print ("Hello from the main thread!");
}

static void Print (object messageObj)
{
  string message = (string) messageObj;
  Console.WriteLine (message);
}

// Output:
Hello from t!
Hello from the main thread!

To make this possible, Thread's constructor is overloaded to accept either of two delegates:

public delegate void ThreadStart(  );
public delegate void ParameterizedThreadStart (object obj);

The limitation of ParameterizedThreadStart is that it accepts only one argument. And because it's of type object, it usually needs to be cast. An alternative is to use the parameterless ThreadStart in conjunction with an anonymous method as follows:

static void Main(  )
{
  Thread t = new Thread (delegate(  ) { Print ("Hello from t!"); });
  t.Start(  );
}
static void Print (string message) { Console.WriteLine (message); }

The advantage is that the target method (in this case, Print) can accept any number of arguments, and no casting is required. The flip side, though, is that you must keep outer-variable semantics in mind, as the following example demonstrates:

static void Main(  )
{
  string text = "t1";
  Thread t1 = new Thread (delegate(  ) { Print (text); });

  text = "t2";
  Thread t2 = new Thread (delegate(  ) { Print (text); });

  t1.Start(  );
  t2.Start(  );
}
static void Print (string message) { Console.WriteLine (message); }

// Output:
t2
t2

Sharing Data Between Threads

The preceding example demonstrated that local variables captured in an anonymous method are shared between threads. This, however, is an unusual (and generally undesirable) scenario. Let's examine what normally happens with local variables on a thread. Consider this program:

static void Main(  )
{
  new Thread (Go).Start();      // Call Go(  ) on a new thread
  Go();                         // Call Go(  ) on the main thread
}

static void Go(  )
{
  // Declare and use a local variable - 'cycles'
  for (int cycles = 0; cycles < 5; cycles++) Console.Write (cycles);
}

// OUTPUT:  0123401234

Each thread as it enters the Go method gets a separate copy of the cycles variable and so is unable to interfere with another concurrent thread. The CLR and operating system achieve this by assigning each thread its own private memory stack for local variables.

If threads do want to share data, they usually do so via a common reference. Here's an example:

class ThreadTest
{
  static void Main(  )
  {
    Introducer intro = new Introducer(  );
    intro.Message = "Hello";

    new Thread (intro.Run).Start(  );

    Console.ReadLine(  );
    Console.WriteLine (intro.Reply);
  }
}

class Introducer
{
  public string Message;
  public string Reply;

  public void Run(  )
  {
    Console.WriteLine (Message);
    Reply = "Hi right back!";
  }
}

// Output:
Hello
Hi right back!    (after pressing Enter)

This system allows both for passing data to a new thread and for receiving data back from it later on. Moreover, it's the means by which threads can communicate with each other as they're running.

Warning

Shared data is the primary cause of complexity and obscure errors in multithreading. Although often essential, it pays to keep it as simple as possible.

Fields declared as static are also shared between threads. Static fields, in fact, offer the simplest approach to sharing data, where application-wide scope is appropriate.

Thread Pooling

Whenever you start a thread, a few hundred microseconds are spent organizing such things as a fresh private local variable stack. Each thread also consumes around 1 MB of memory. The thread pool cuts these overheads by sharing and recycling threads, allowing multithreading to be applied at a more granular level without performance penalty.

The easiest way into the thread pool is by calling ThreadPool.QueueUserWorkItem instead of instantiating and starting a Thread object. Here's an example:

static void Main(  )
{
  ThreadPool.QueueUserWorkItem (Go);
  ThreadPool.QueueUserWorkItem (Go, 123);
  Console.ReadLine(  );
}

static void Go (object data)
{
  Console.WriteLine ("Hello from the thread pool! " + data);
}

// Output:
Hello from the thread pool!
Hello from the thread pool! 123

Our target method, Go, must accept a single object argument (to satisfy the WaitCallback delegate). This provides a convenient way of passing data to the method, just like with ParameterizedThreadStart.

The thread pool also keeps a lid on the total number of worker threads it will run simultaneously. Too many threads throttle the operating system with administrative burden. You can set the upper limit by calling ThreadPool.SetMaxThreads; the default is 50 (this may vary according to the hardware and operating system). Once exceeded, jobs queue up and start only when another finishes.
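
For example, you can query the current limits via GetMaxThreads and then raise them (the figure of 100 below is arbitrary):

int workerThreads, completionPortThreads;
ThreadPool.GetMaxThreads (out workerThreads, out completionPortThreads);
Console.WriteLine (workerThreads);          // The current cap on worker threads

ThreadPool.SetMaxThreads (100, 100);        // Raise both caps to 100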

The thread pool makes arbitrarily concurrent applications possible, such as a web server. If you start a new thread on the server for every client request, a heavy spurt of concurrent client activity could choke the server. The thread pool addresses this problem by limiting the number of active threads. (The asynchronous method pattern takes this further by making highly efficient use of the pooled threads; see Chapter 20, Asynchronous Methods.)

You get just one thread pool per application. You can query if you're currently executing on a pooled thread via the property Thread.CurrentThread.IsThreadPoolThread.
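
For example:

Console.WriteLine (Thread.CurrentThread.IsThreadPoolThread);   // False on the main thread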

Tip

The following automatically use the thread pool:

  • Asynchronous delegates

  • The BackgroundWorker helper class

  • System.Timers.Timer and System.Threading.Timer

  • WCF, Remoting, ASP.NET, and Web Services application servers

Optimizing the pool

The pool manager creates threads only as they're needed; this ensures that a "Hello, world" program doesn't allocate 50 threads and consume 50 MB of memory. But suppose a program rapidly enqueues 50 tasks to the pool as follows:

static void Main(  )
{
  for (int i = 0; i < 50; i++) ThreadPool.QueueUserWorkItem (Go);
}

static void Go (object notUsed)
{
  // Compute a hash on a 100,000 byte random byte sequence:
  byte[] data = new byte [100000];
  new Random(  ).NextBytes (data);
  System.Security.Cryptography.SHA1.Create(  ).ComputeHash (data);
}

The pool manager stops short of creating 50 threads. In fact, to begin with, it stops right at the number of processors or CPU cores; on a dual-core computer, the pool manager will create just two threads and queue the remaining 48 jobs to these two threads. Matching the thread count to the core count allows a program to retain a small memory footprint without hurting performance-as long as the threads are efficiently used (which in this case they are). But now suppose we insert a Thread.Sleep statement before computing the hash, idling the CPU for a while (or run a time-consuming database query). The pool manager's thread-economy strategy breaks down; it would now do better to create 50 threads, so all the jobs could sleep (or wait for the database server) simultaneously.

Fortunately, the pool manager has a backup plan. If its queue remains stationary for more than half a second, it responds by creating more threads-one every half-second-up to the capacity of the thread pool. Once created, a thread's in the pool for life, so it will always be available immediately to service new requests.

The half-second delay is a two-edged sword. On the one hand, it means that a one-off burst of brief activity (such as in our example, without the sleep) doesn't make a program suddenly consume an extra unnecessary 50 MB of memory. On the other hand, it can needlessly delay things when a pooled thread blocks, such as when querying a database or calling WebClient.DownloadFile. For this reason, you can tell the pool manager not to delay in the allocation of the first x threads, as follows:

ThreadPool.SetMinThreads (50, 50);

This makes sense on client applications that use the thread pool; if you're writing an application server using a technology such as WCF or ASP.NET, the infrastructure does this automatically.

Foreground and Background Threads

By default, threads you create explicitly are foreground threads; pooled threads are background threads. The difference is that foreground threads keep the application alive for as long as any one of them is running; background threads do not. Once all foreground threads finish, the application ends, and any background threads still running abruptly terminate.

Tip

A thread's foreground/background status has no relation to its priority or allocation of execution time.

You can query or change a thread's background status using its IsBackground property. Here's an example:

class PriorityTest
{
  static void Main (string[] args)
  {
    Thread worker = new Thread (delegate() { Console.ReadLine(  ); });
    if (args.Length > 0) worker.IsBackground = true;
    worker.Start(  );
  }
}

If this program is called with no arguments, the worker thread assumes foreground status and will wait on the ReadLine statement for the user to press Enter. Meanwhile, the main thread exits, but the application keeps running because a foreground thread is still alive.

On the other hand, if an argument is passed to Main( ), the worker is assigned background status, and the program exits almost immediately as the main thread ends (terminating the ReadLine).

When a background thread terminates in this manner, any finally blocks are circumvented. This is a problem if your program employs finally blocks (or the using keyword) to perform cleanup work such as releasing resources or deleting temporary files. To avoid this, you can explicitly wait out such background threads upon exiting an application. There are two ways to accomplish this:

  • If you've created the thread yourself, call Join on the thread.

  • If you're using the thread pool, use an event wait handle (discussed later in this chapter in the section "Signaling with Event Wait Handles").

In either case, you should specify a timeout, so you can abandon a renegade thread should it refuse to finish for some reason. This is your backup exit strategy: in the end, you want your application to close-without the user having to enlist help from the Task Manager!
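
Here's a sketch of the first option, assuming worker is a background thread you created earlier:

worker.Join (TimeSpan.FromSeconds (5));   // Wait up to 5 seconds for cleanup, then give up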

Tip

If a user uses the Task Manager to forcibly end a .NET process, all threads "drop dead" as though they were background threads. This is observed rather than documented behavior, and it could vary depending on the CLR and operating system version.

Foreground threads don't require this treatment, but you must take care to avoid bugs that could cause the thread not to end. A common cause for applications failing to exit properly is the presence of active foreground threads.

Thread Priority

A thread's Priority property determines how much execution time it gets relative to other active threads in the same process, on the following scale:

enum ThreadPriority { Lowest, BelowNormal, Normal, AboveNormal, Highest }

This becomes relevant only when multiple threads are simultaneously active.

Elevating a thread's priority doesn't make it capable of performing real-time work, because it's still limited by the application's process priority. To perform real-time work, you must also elevate the process priority using the Process class in System.Diagnostics (we didn't tell you how to do this):

Process.GetCurrentProcess(  ).PriorityClass = ProcessPriorityClass.High;

ProcessPriorityClass.High is actually one notch short of the highest priority: Realtime. Setting a process priority to Realtime instructs the OS that you never want the process to yield CPU time to another process. If your program enters an accidental infinite loop, you might find even the operating system locked out, with nothing short of the power button left to rescue you! For this reason, High is usually the best choice for real-time applications.

Warning

If your real-time application has a user interface, elevating the process priority gives screen updates excessive CPU time, slowing down the entire computer (particularly if the UI is complex). Lowering the main thread's priority in conjunction with raising the process's priority ensures that the real-time thread doesn't get preempted by screen redraws, but doesn't solve the problem of starving other applications of CPU time, because the operating system will still allocate disproportionate resources to the process as a whole. An ideal solution is to have the real-time worker and user interface run as separate applications with different process priorities, communicating via Remoting or memory-mapped files. Memory-mapped files are ideally suited to this task; we explain how they work in the section "Shared Memory" in Chapter 22, Integrating with Native DLLs.

Even with an elevated process priority, there's a limit to the suitability of the managed environment in handling hard real-time requirements. In Chapter 12, Disposal and Garbage Collection, we described the issues of garbage collection and the workarounds. Further, the operating system may present additional challenges-even for unmanaged applications-that are best solved with dedicated hardware or a specialized real-time platform.

Exception Handling

Any try/catch/finally blocks in scope when a thread is created are of no relevance to the thread when it starts executing. Consider the following program:

public static void Main(  )
{
  try
  {
    new Thread (Go).Start(  );
  }
  catch (Exception ex)
  {
    // We'll never get here!
    Console.WriteLine ("Exception!");
  }
}

static void Go(  ) { throw null; }

The try/catch statement in this example is useless, and the newly created thread will be encumbered with an unhandled NullReferenceException. This behavior makes sense when you consider that each thread has an independent execution path.

The remedy is to move the exception handler into the Go method:

public static void Main(  )
{
   new Thread (Go).Start(  );
}

static void Go(  )
{
  try
  {
    ...
    throw null;      // this exception will get caught below
    ...
  }
  catch (Exception ex)
  {
    // Typically log the exception, and/or signal another thread
    // that we've come unstuck
    ...
  }
}

You need an exception handler on all thread entry methods in production applications-just as you do (usually at a higher level, in the execution stack) on your main thread. An unhandled exception causes the whole application to shut down. With an ugly dialog!

Warning

The "global" exception-handling event for Windows Forms applications, Application.ThreadException, works only for exceptions thrown on the main UI thread. You still must handle exceptions on worker threads manually. AppDomain.CurrentDomain.UnhandledException fires on any unhandled exception, but provides no means of preventing the application from shutting down afterward.

There are, however, two cases where you don't need to handle exceptions on a worker thread, because the .NET Framework does it for you. These are:

  • Asynchronous delegates

  • BackgroundWorker

Asynchronous Delegates

In the preceding section, we described how to pass data to a thread, using ParameterizedThreadStart and ThreadPool.QueueUserWorkItem. Sometimes you need to go the other way and get return values back from a thread when it finishes executing. Asynchronous delegates offer a convenient mechanism for this, allowing any number of typed arguments to be passed in both directions. Furthermore, unhandled exceptions on asynchronous delegates are conveniently rethrown on the original thread (or more accurately, the thread that calls EndInvoke), and so they don't need explicit handling.

Asynchronous delegates always use the thread pool.

Warning

Don't confuse asynchronous delegates with asynchronous methods (methods starting with "Begin" or "End", such as File.BeginRead/File.EndRead). Asynchronous methods follow a similar protocol outwardly, but they exist to solve a much harder problem, which we describe in Chapter 20, Asynchronous Methods.

Here's how you start a worker task via an asynchronous delegate:

  1. Declare a delegate whose signature matches the method you want to run in parallel.

  2. Instantiate the delegate.

  3. Call BeginInvoke on the delegate, saving its IAsyncResult return value.

    BeginInvoke returns immediately to the caller. You can then perform other activities while the pooled thread is working. When you need its results, go to step 4.

  4. Call EndInvoke on the delegate, passing in the saved IAsyncResult object.

In the following example, we use an asynchronous delegate to execute, concurrently with the main thread, a simple method that returns a string's length:

delegate int WorkInvoker (string text);

static void Main(  )
{
  WorkInvoker method = Work;
  IAsyncResult cookie = method.BeginInvoke ("test", null, null);
  //
  // ... here's where we can do other work in parallel...
  //
  int result = method.EndInvoke (cookie);
  Console.WriteLine ("String length is: " + result);
}

static int Work (string s) { return s.Length; }

EndInvoke does three things. First, it waits for the asynchronous delegate to finish executing, if it hasn't already. Second, it receives the return value (as well as any ref or out parameters). Third, it throws any unhandled worker exception back to the calling thread.

Warning

If the method you're calling with an asynchronous delegate has no return value, you are still (technically) obliged to call EndInvoke. In practice, this is open to debate; there are no EndInvoke police to administer punishment to noncompliers! If you choose not to call EndInvoke, however, you'll need to consider exception handling on the worker method to avoid silent failures.

You can also specify a callback delegate when calling BeginInvoke-a method accepting an IAsyncResult object that's automatically called upon completion. This allows the instigating thread to "forget" about the asynchronous delegate, but it requires a bit of extra work at the callback end:

static void Main(  )
{
  WorkInvoker method = Work;
  method.BeginInvoke ("test", Done, method);
  // ...
  //
}

delegate int WorkInvoker (string text);

static int Work (string s) { return s.Length; }

static void Done (IAsyncResult cookie)
{
  WorkInvoker method = (WorkInvoker) cookie.AsyncState;
  int result = method.EndInvoke (cookie);
  Console.WriteLine ("String length is: " + result);
}

The final argument to BeginInvoke is a user state object that populates the AsyncState property of IAsyncResult. It can contain anything you like; in this case, we're using it to pass the method delegate to the completion callback, so we can call EndInvoke on it.

ThreadPool.QueueUserWorkItem can provide a good alternative to asynchronous delegates used in this fashion. Asynchronous delegates have the advantage of typed method arguments; QueueUserWorkItem has the advantage of needing less plumbing code.

Synchronization

So far, we've described how to start a task on a thread, how to configure a thread, and how to pass data in both directions. We've also described how local variables are private to a thread and how references can be shared among threads allowing them to communicate via common fields.

The next step is synchronization: coordinating the actions of threads for a predictable outcome. Synchronization is particularly important when threads access the same data; it's surprisingly easy to run aground in this area.

Synchronization constructs can be divided into four categories:

  • Simple blocking methods
    These wait for another thread to finish or for a period of time to elapse. Sleep, Join, and EndInvoke are simple blocking methods.

  • Locking constructs
    These enforce exclusive access to a resource, such as a field or section of code, ensuring that only one thread can enter at a time. Locking is the primary thread-safety mechanism, allowing threads to access common data without interfering with each other. The locking constructs are lock and Mutex (and a variation called Semaphore).

  • Signaling constructs
    These allow a thread to pause until receiving a notification from another, avoiding the need for inefficient polling. There are two signaling devices: event wait handles and Monitor's Wait/Pulse methods.

  • Nonblocking synchronization constructs
    These protect access to a common field by calling upon processor primitives. The Interlocked class and the volatile keyword are the two constructs in this category.

Blocking is essential to all but the last category. Let's briefly examine this concept.

Blocking

A thread is deemed blocked when its execution is paused for some reason, such as when waiting for another to end via Join or EndInvoke. A blocked thread consumes almost no processor time; the CLR and operating system know about blocked threads, and provide appropriate support to keep them in a dormant state, waking them up when their blocking conditions are satisfied. You can test for a thread being blocked via its ThreadState property:

bool blocked = (someThread.ThreadState & ThreadState.WaitSleepJoin) != 0;

Tip

ThreadState is a flags enum, combining three "layers" of data in a bitwise fashion. Most values, however, are redundant, unused, or deprecated. The following code strips a ThreadState to one of four useful values: Unstarted, Running, WaitSleepJoin, and Stopped:

public static ThreadState SimpleThreadState (ThreadState ts)
{
  return ts & (ThreadState.Unstarted |
               ThreadState.WaitSleepJoin |
               ThreadState.Stopped);
}

The ThreadState property is useful for diagnostic purposes, but unsuitable for synchronization, because a thread's state may change in between testing ThreadState and acting upon that information.

Blocking Versus Spinning

Sometimes a thread must pause until a certain condition is met. Signaling constructs achieve this efficiently by blocking while their condition is unsatisfied. However, there is a grotesque alternative: a thread can await a condition by spinning in a polling loop. For example:

while (!proceed);

or:

while (DateTime.Now < nextStartTime);

This is very wasteful on processor time: as far as the CLR and operating system are concerned, the thread is performing an important calculation, and so gets allocated resources accordingly!

Sometimes a hybrid between blocking and spinning is used as a variation:

while (!proceed) Thread.Sleep (10);    // "Spin-Sleeping!"

Although inelegant, this is far more efficient than outright spinning. Problems can arise, though, due to concurrency issues on the proceed flag. Proper use of locking and signaling avoids this.

SpinWait

Amazingly, the Thread class provides a method that does nothing other than spin! SpinWait, unlike Sleep, doesn't block or relinquish the CPU. Instead, it loops endlessly, keeping the processor "uselessly busy" for the given number of iterations. Fifty iterations might equate to a pause of around a microsecond, although this could vary depending on CPU speed and load. SpinWait is rarely used; its primary purpose is to wait on a resource or field that's expected to change extremely soon (well, inside a microsecond) without relinquishing the processor time slice. This technique is rarely used outside of the CLR and operating system.
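
Here's what a call looks like (the iteration count of 50 comes from the estimate above):

Thread.SpinWait (50);   // Keep the CPU busy briefly without relinquishing the time slice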

Locking

Exclusive locking is used to ensure that only one thread can enter particular sections of code at a time. The .NET Framework provides two exclusive locking constructs: lock and Mutex. Of the two, the lock construct is faster and more convenient. Mutex, though, has a niche in that its lock can span applications in different processes on the computer.

This section focuses on the lock construct; later we show how Mutex can be used for cross-process locking. Finally, we introduce Semaphore, .NET's nonexclusive locking construct.

Let's start with the following class:

class ThreadUnsafe
{
  static int val1, val2;

  static void Go(  )
  {
    if (val2 != 0) Console.WriteLine (val1 / val2);
    val2 = 0;
  }
}

This class is not thread-safe: if Go was called by two threads simultaneously, it would be possible to get a division-by-zero error, because val2 could be set to zero in one thread right as the other thread was in between executing the if statement and Console.WriteLine.

Here's how lock can fix the problem:

class ThreadSafe
{
  static object locker = new object(  );
  static int val1, val2;

  static void Go(  )
  {
    lock (locker)
    {
      if (val2 != 0) Console.WriteLine (val1 / val2);
      val2 = 0;
    }
  }
}

Only one thread can lock the synchronizing object (in this case, locker) at a time, and any contending threads are blocked until the lock is released. If more than one thread contends the lock, they are queued on a "ready queue" and granted the lock on a first-come, first-served basis. Exclusive locks are sometimes said to enforce serialized access to whatever's protected by the lock, because one thread's access cannot overlap with that of another. In this case, we're protecting the logic inside the Go method, as well as the fields val1 and val2.

A thread blocked while awaiting a contended lock has a ThreadState of WaitSleepJoin. In the section "Interrupt and Abort," later in this chapter, we describe how a blocked thread can be forcibly released via another thread. This is a fairly heavy-duty technique that might be used in ending a thread.

C#'s lock statement is in fact a syntactic shortcut for a call to the methods Monitor.Enter and Monitor.Exit, with a try-finally block. Here's what's actually happening within the Go method of the preceding example:

Monitor.Enter (locker);
try
{
  if (val2 != 0) Console.WriteLine (val1 / val2);
  val2 = 0;
}
finally { Monitor.Exit (locker); }

Calling Monitor.Exit without first calling Monitor.Enter on the same object throws an exception.

Monitor also provides a TryEnter method that allows a timeout to be specified, either in milliseconds or as a TimeSpan. The method then returns true if a lock was obtained, or false if no lock was obtained because the method timed out. TryEnter can also be called with no argument, which "tests" the lock, timing out immediately if the lock can't be obtained right away.
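
Here's a sketch of the timeout overload, reusing the locker, val1, and val2 fields from the earlier example:

if (Monitor.TryEnter (locker, 500))        // Wait up to 500 ms for the lock
{
  try
  {
    if (val2 != 0) Console.WriteLine (val1 / val2);
    val2 = 0;
  }
  finally { Monitor.Exit (locker); }
}
else
{
  // Couldn't acquire the lock within 500 ms
}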

Choosing the Synchronization Object

Any object visible to each of the partaking threads can be used as a synchronizing object, subject to one hard rule: it must be a reference type. It's also highly recommended that the synchronizing object be privately scoped to the class (i.e., a private instance field) to prevent unintentional interaction from external code locking the same object. Subject to these rules, the synchronizing object can double as the object it's protecting, such as with the list field in the following example:

class ThreadSafe
{
  List <string> list = new List <string>(  );

  void Test(  )
  {
    lock (list)
    {
      list.Add ("Item 1");
      ...

A dedicated field (such as locker, in the example prior) allows precise control over the scope and granularity of the lock. The containing object (this)-or even its type-can also be used as a synchronization object:

lock (this) { ... }

or:

lock (typeof (Widget)) { ... }    // For protecting access to statics

Both are discouraged, however, because they offer excessive scope to the synchronization object. Code in other places may lock on that same instance (or type) with an unpredictable outcome. A lock on a type may even seep through application domain boundaries!

Tip

Locking doesn't restrict access to the synchronizing object itself in any way. In other words, x.ToString( ) will not block because another thread has called lock(x); both threads must call lock(x) in order for blocking to occur.

Nested Locking

A thread can repeatedly lock the same object, via multiple calls to Monitor.Enter, or nested lock statements. The object is subsequently unlocked when a corresponding number of Monitor.Exit statements have executed or when the outermost lock statement has exited. This allows for the most natural semantics when one method calls another as follows:

static object x = new object(  );

static void Main(  )
{
  lock (x)
  {
     Console.WriteLine ("I have the lock");
     Nest(  );
     Console.WriteLine ("I still have the lock");
  }
  // Now the lock is released.
}

static void Nest(  )
{
  lock (x) { }
  // We still have the lock on x!
}

A thread can block on only the first, or outermost, lock.

When to Lock

As a basic rule, you should lock before accessing any field comprising writable shared state. Even in the simplest case-an assignment operation on a single field-you must consider synchronization. In the following class, neither the Increment nor the Assign method is thread-safe:

class ThreadUnsafe
{
  static int x;
  static void Increment(  ) { x++; }
  static void Assign(  )    { x = 123; }
}

Here are thread-safe versions of Increment and Assign:

class ThreadSafe
{
  static object locker = new object(  );
  static int x;

  static void Increment(  ) { lock (locker) x++; }
  static void Assign(  )    { lock (locker) x = 123; }
}

In the section "the section called "Nonblocking Synchronization"," later in this chapter, we explain how this need arises, and how the volatile and Interlocked constructs can provide an alternative to locking in these simple situations.

Locking and Atomicity

If a group of variables are always read and written within the same lock, you can say the variables are read and written atomically. Let's suppose fields x and y are always read and assigned within a lock on object locker:

lock (locker) { if (x != 0) y /= x; }

One can say x and y are accessed atomically, because the code block cannot be divided or preempted by the actions of another thread in such a way that will change x or y and invalidate its outcome. You'll never get a division-by-zero error, providing x and y are always accessed within this same exclusive lock.

Instruction atomicity is a different, although analogous concept: an instruction is atomic if it executes indivisibly on the underlying processor (see the later section "Nonblocking Synchronization").

Performance, Races, and Deadlocks

Locking is fast: you can expect to acquire and release a lock in less than 100 nanoseconds on a 3 GHz computer if the lock is uncontended. If it is contended, the consequential blocking and task-switching move the overhead closer to the microsecond region, although it may be longer before the thread is actually rescheduled. This, in turn, is dwarfed by the hours of overhead-or overtime-that can result from not locking when you should have!

If used improperly, locking can have adverse effects: impoverished concurrency, deadlocks, and lock races. Impoverished concurrency occurs when too much code is placed in a lock statement, causing other threads to block unnecessarily. A deadlock is when two threads each wait for a lock held by the other, so neither can proceed. A lock race happens when it's possible for either of two threads to obtain a lock first; the program breaks if the "wrong" thread wins.

Deadlocks are most commonly a syndrome of too many synchronizing objects. A good rule is to start on the side of having fewer synchronizing objects, increasing the locking granularity only when a plausible need arises. Locking objects in a consistent order, where possible, also alleviates deadlocking.
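
For example, if every thread that needs two locks always acquires them in the same order, the classic two-lock deadlock can't arise (the field names lockerA and lockerB are just for illustration):

lock (lockerA)          // Always lockerA first...
  lock (lockerB)        // ...then lockerB, on every thread
  {
    // Access the state protected by both locks
  }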

Warning

The CLR, in a standard hosting environment, is not like SQL Server and does not automatically detect and resolve deadlocks by terminating one of the offenders. A threading deadlock causes participating threads to block indefinitely, unless you've specified a locking timeout. (Under the SQL CLR integration host, however, deadlocks are automatically detected and a [catchable] exception is thrown on one of the threads).

An excellent article describing the intricacies of deadlocking is available at https://research.microsoft.com/~birrell/papers/threadscsharp.pdf.

Mutex

A Mutex is like a C# lock, but it can work across multiple processes. In other words, Mutex can be computer-wide as well as application-wide.

Tip

Acquiring and releasing an uncontended Mutex takes a few microseconds; about 50 times slower than a lock.

With a Mutex class, you call the WaitOne method to lock and ReleaseMutex to unlock. Just as with the lock statement, a Mutex can be released only from the same thread that obtained it.

A common use for a cross-process Mutex is to ensure that only one instance of a program can run at a time. Here's how it's done:

class OneAtATimePlease
{
  // Naming a Mutex makes it available computer-wide. Use a name that's
  // unique to your company and application (e.g., include your URL).

  static Mutex mutex = new Mutex (false, "oreilly.com OneAtATimeDemo");

  static void Main(  )
  {
    // Wait a few seconds if contended, in case another instance
    // of the program is still in the process of shutting down.

    if (!mutex.WaitOne (TimeSpan.FromSeconds (3), false))
    {
      Console.WriteLine ("Another instance of the app is running. Bye!");
      return;
    }
    try
    {
      Console.WriteLine ("Running. Press Enter to exit");
      Console.ReadLine(  );
    }
    finally { mutex.ReleaseMutex(  ); }
  }
}

A good feature of Mutex is that if the application terminates without ReleaseMutex being called, the CLR releases the Mutex automatically.

Semaphore

A Semaphore is like a nightclub: it has a certain capacity, enforced by a bouncer. Once it's full, no more people can enter and a queue builds up outside. Then, for each person that leaves, one person enters from the head of the queue. The constructor requires a minimum of two arguments: the number of places currently available in the nightclub and the club's total capacity.

A Semaphore with a capacity of one is similar to a Mutex or lock, except that the Semaphore has no "owner"-it's thread-agnostic. Any thread can call Release on a Semaphore, whereas with Mutex and lock, only the thread that obtained the lock can release it.

Semaphores can be useful in limiting concurrency-preventing too many threads from executing a particular piece of code at once. In the following example, five threads try to enter a nightclub that allows only three threads in at once:

class TheClub      // No door lists!
{
  static Semaphore s = new Semaphore (3, 3);   // Available=3; Capacity=3

  static void Main(  )
  {
    for (int i = 1; i <= 5; i++) new Thread (Enter).Start (i);
  }

  static void Enter (object id)
  {
    Console.WriteLine (id + " wants to enter");s.WaitOne(  );
    Console.WriteLine (id + " is in!");           // Only three threads
    Thread.Sleep (1000 * (int) id);               // can be here at
    Console.WriteLine (id + " is leaving");       // a time.
    s.Release(  );
  }
}

1 wants to enter
1 is in!
2 wants to enter
2 is in!
3 wants to enter
3 is in!
4 wants to enter
5 wants to enter
1 is leaving
4 is in!
2 is leaving
5 is in!

If the Sleep statement was instead performing intensive disk I/O, the Semaphore would improve overall performance by limiting excessive concurrent hard-drive activity.

A Semaphore, if named, can span processes in the same way as a Mutex.

Thread Safety

A program or method is thread-safe if it has no indeterminacy in the face of any multithreading scenario. Thread safety is achieved primarily with locking and by reducing the possibilities for thread interaction.

General-purpose types are rarely thread-safe in their entirety, for the following reasons:

  • The development burden in full thread safety can be significant, particularly if a type has many fields (each field is a potential for interaction in an arbitrarily multithreaded context).

  • Thread safety can entail a performance cost (payable, in part, whether or not the type is actually used by multiple threads).

  • A thread-safe type does not necessarily make the program using it thread-safe, and sometimes the work involved in the latter can make the former redundant.

Thread safety is hence usually implemented just where it needs to be, in order to handle a specific multithreading scenario.

There are, however, a few ways to "cheat" and have large and complex classes run safely in a multithreaded environment. One is to sacrifice granularity by wrapping large sections of code-even access to an entire object-around a single exclusive lock, enforcing serialized access at a high level. This tactic is, in fact, essential if you want to use thread-unsafe third-party code (or most Framework types, for that matter) in a multithreaded context. The trick is simply to use the same exclusive lock to protect access to all properties, methods, and fields on the thread-unsafe object. The solution works well if the object's methods all execute quickly (otherwise, there will be a lot of blocking).

Warning

Primitive types aside, few .NET Framework types, when instantiated, are thread-safe for anything more than concurrent read-only access. The onus is on the developer to superimpose thread safety, typically with exclusive locks.

Another way to cheat is to minimize thread interaction by minimizing shared data. This is an excellent approach and is used implicitly in "stateless" middle-tier application and web page servers. Since multiple client requests can arrive simultaneously, the server methods they call must be thread-safe. A stateless design (popular for reasons of scalability) intrinsically limits the possibility of interaction, since classes do not persist data between requests. Thread interaction is then limited just to static fields one may choose to create, for such purposes as caching commonly used data in memory and in providing infrastructure services such as authentication and auditing.

The final approach in implementing thread safety is to use an automatic locking regime. The .NET Framework does exactly this, if you subclass ContextBoundObject and apply the Synchronization attribute to the class. Whenever a method or property on such an object is then called, an object-wide lock is automatically taken for the whole execution of the method or property. Although this reduces the thread-safety burden, it creates problems of its own: deadlocks that would not otherwise occur, impoverished concurrency, and unintended reentrancy. For these reasons, manual locking is generally a better option-at least until a less simplistic automatic locking regime becomes available.

Thread Safety and .NET Framework Types

Locking can be used to convert thread-unsafe code into thread-safe code. A good application of this is the .NET Framework: nearly all of its nonprimitive types are not thread-safe when instantiated, and yet they can be used in multithreaded code if all access to any given object is protected via a lock. Here's an example, where two threads simultaneously add items to the same List collection, then enumerate the list:

class ThreadSafe
{
  static List <string> list = new List <string>(  );

  static void Main(  )
  {
    new Thread (AddItems).Start(  );
    new Thread (AddItems).Start(  );
  }

  static void AddItems(  )
  {
    for (int i = 0; i < 100; i++)
      lock (list)
        list.Add ("Item " + list.Count);

    string[] items;
    lock (list) items = list.ToArray(  );
    foreach (string s in items) Console.WriteLine (s);
  }
}

In this case, we're locking on the list object itself. If we had two interrelated lists, we would have to choose a common object upon which to lock (we could nominate one of the lists, or use an independent field).

Enumerating .NET collections is also thread-unsafe in the sense that an exception is thrown if another thread alters the list during enumeration. Rather than locking for the duration of enumeration, in this example, we first copy the items to an array. This avoids holding the lock excessively if what we're doing during enumeration is potentially time-consuming. (Another solution is to use a reader/writer lock; see the section "ReaderWriterLockSlim" later in this chapter.)

Locking around thread-safe objects

Sometimes you also need to lock around accessing thread-safe objects. To illustrate, imagine that the Framework's List class was, indeed, thread-safe, and we want to add an item to a list:

if (!myList.Contains (newItem)) myList.Add (newItem);

Whether or not the list was thread-safe, this statement is certainly not! The whole if statement would have to be wrapped in a lock in order to prevent preemption in between testing for containership and adding the new item. This same lock would then need to be used everywhere we modified that list. For instance, the following statement would also need to be wrapped, in the identical lock:

myList.Clear(  );

to ensure that it did not preempt the former statement. In other words, we would have to lock exactly as with our thread-unsafe collection classes (making the List class's hypothetical thread safety redundant).
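
In other words, both statements would end up looking something like this (locker being whatever common field we nominate):

lock (locker)
  if (!myList.Contains (newItem)) myList.Add (newItem);
...
lock (locker) myList.Clear(  );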

Static methods

Wrapping access to an object around a custom lock works only if all concurrent threads are aware of-and use-the lock. This may not be the case if the object is widely scoped. The worst case is with static members in a public type. For instance, imagine if the static property on the DateTime struct, DateTime.Now, was not thread-safe, and that two concurrent calls could result in garbled output or an exception. The only way to remedy this with external locking might be to lock the type itself-lock(typeof(DateTime))-before calling DateTime.Now. This would work only if all programmers agreed to do this (which is unlikely). Furthermore, locking a type creates problems of its own.

For this reason, static members on the DateTime struct are guaranteed to be thread-safe. This is a common pattern throughout the .NET Framework: static members are thread-safe; instance members are not. Following this pattern also makes sense when writing custom types, so as not to create impossible thread-safety conundrums.
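
Here's a sketch of how a custom type might follow that pattern (the Counter class is hypothetical):

public class Counter
{
  static readonly object staticLocker = new object(  );
  static int totalCount;

  // Static member: made thread-safe, per the Framework convention
  public static void IncrementTotal(  ) { lock (staticLocker) totalCount++; }

  // Instance member: left thread-unsafe; callers must lock if they share instances
  int instanceCount;
  public void Increment(  ) { instanceCount++; }
}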

Tip

When writing components for public consumption, a good policy is to program, at a minimum, so as not to preclude thread safety. This means being particularly careful with static members, whether used internally or exposed publicly, and considering more granular thread safety if you have long-running methods.

Thread Safety in Application Servers

Application servers need to be multithreaded to handle simultaneous client requests. WCF, ASP.NET, and Web Services applications are implicitly multithreaded; the same holds true for Remoting server applications that use a network channel such as TCP or HTTP. This means that when writing code on the server side, you must consider thread safety if there's any possibility of interaction among the threads processing client requests. Fortunately, such a possibility is rare; a typical server class either is stateless (no fields) or has an activation model that creates a separate object instance for each client or each request. Interaction only usually arises through static fields, sometimes used for caching in memory parts of a database to improve performance.

For example, suppose you have a RetrieveUser method that queries a database:

// User is a custom class with fields for user data
internal User RetrieveUser (int id) { ... }

If this method was called frequently, you could improve performance by caching the results in a static Dictionary. Here's a solution that takes thread safety into account:

static class UserCache
{
  static Dictionary <int, User> _users = new Dictionary <int, User>(  );

  internal static User GetUser (int id)
  {
    User u = null;

    lock (_users)
      if (_users.TryGetValue (id, out u))
        return u;

    u = RetrieveUser (id);           // Method to retrieve from database;
    lock (_users) _users [id] = u;
    return u;
  }
}

We must, at a minimum, lock around reading and updating the dictionary to ensure thread safety. In this example, we choose a practical compromise between simplicity and performance in locking. Our design actually creates a very small potential for inefficiency: if two threads simultaneously called this method with the same previously unretrieved id, the RetrieveUser method would be called twice-and the dictionary would be updated unnecessarily. Locking once across the whole method would prevent this, but would create a worse inefficiency: the entire cache would be locked up for the duration of calling RetrieveUser, during which time other threads would be blocked in retrieving any user.

Thread Safety in Rich Client Applications

Both the Windows Forms and Windows Presentation Foundation (WPF) libraries have special threading models. Although each has a separate implementation, they are both very similar in how they function.

The objects that make up a rich client are primarily based on Control in the case of Windows Forms or DependencyObject in the case of WPF. None of these objects is thread-safe, and so cannot be safely accessed from two threads at once. To ensure that you obey this, WPF and Windows Forms have models whereby only the thread that instantiates a UI object can call any of its members. Violate this and an exception is thrown.

On the positive side, this means you don't need to lock around accessing a UI object. On the negative side, if you want to call a member on object X created on another thread Y, you must marshal the request to thread Y. You can do this explicitly as follows:

  • In Windows Forms, call Invoke or BeginInvoke on the control.

  • In WPF, call Invoke or BeginInvoke on the element's Dispatcher object.

Invoke and BeginInvoke both accept a delegate, which references the method on the target control that you want to run. Invoke works synchronously: the caller blocks until the marshal is complete. BeginInvoke works asynchronously: the caller returns immediately and the marshaled request is queued up (using the same message queue that handles keyboard, mouse, and timer events).
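
For instance, here's a sketch of a Windows Forms helper that marshals an update onto the UI thread (txtStatus is a hypothetical TextBox created by the UI thread):

void UpdateStatus (string message)
{
  txtStatus.Invoke (new MethodInvoker (delegate
  {
    txtStatus.Text = message;     // Runs on the thread that created txtStatus
  }));
}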

Tip

BackgroundWorker allows you to avoid explicitly marshaling with Invoke and BeginInvoke. We describe this later in this chapter, in the section "BackgroundWorker."

It's helpful to think of a rich client application as having two distinct categories of threads: UI threads and worker threads. UI threads instantiate (and subsequently "own") UI elements; worker threads do not. Worker threads typically execute long-running tasks such as fetching data.

Most rich client applications have a single UI thread (which is also the main application thread) and periodically spawn worker threads-either directly or using BackgroundWorker. These workers then marshal back to the main UI thread in order to update controls or report on progress.

So, when would an application have multiple UI threads? The main scenario is when you have an application with multiple top-level windows, often called a Single Document Interface (SDI) application, such as Microsoft Word. Each SDI window typically shows itself as a separate "application" on the taskbar and is mostly isolated, functionally, from other SDI windows. By giving each such window its own UI thread, the application can be made more responsive.

Nonblocking Synchronization

Earlier, we said that the need for synchronization arises even in the simple case of assigning or incrementing a field. Although locking can always satisfy this need, a contended lock means that a thread must block, suffering the overhead and latency of being temporarily descheduled. The .NET Framework's nonblocking synchronization constructs can perform simple operations without ever blocking, pausing, or waiting. These involve using instructions that are strictly atomic or instructing the compiler to use "volatile" read and write semantics.

The nonblocking constructs are also simpler to use-in some situations-than locks.

Atomicity and Interlocked

A statement is intrinsically atomic if it executes as a single indivisible instruction on the underlying processor. Strict atomicity precludes any possibility of preemption. In C#, a simple read or assignment on a field of 32 bits or less is atomic on a 32-bit processor. (An Intel Core 2 or Pentium D processor with 64-bit addressing extensions is still essentially 32-bit.) Operations on fields larger than the width of the processor are nonatomic, as are statements that combine more than one read/write operation:

class Atomicity        // This assumes we're running on a 32-bit CPU.
{
  static int x, y;
  static long z;

  static void Test(  )
  {
    long myLocal;
    x = 3;             // Atomic
    z = 3;             // Nonatomic (z is 64 bits)
    myLocal = z;       // Nonatomic (z is 64 bits)
    y += x;            // Nonatomic (read AND write operation)
    x++;               // Nonatomic (read AND write operation)
  }
}

Reading and writing 64-bit fields is nonatomic on 32-bit CPUs because it requires two separate instructions; one for each 32-bit memory location. So, if thread A reads a 64-bit value while thread B is updating it, thread A may end up with a bitwise combination of the old and new values.

Unary operators of the kind x++ are implemented by reading a variable, processing it, and then writing it back. Consider the following class:

class ThreadUnsafe
{
  static int x = 1000;
  static void Go(  ) { for (int i = 0; i < 100; i++) x--; }
}

You might expect that if 10 threads concurrently run Go, x would end up as 0. However, this is not guaranteed, because it's possible for one thread to preempt another in between retrieving x's current value, decrementing it, and writing it back (resulting in an out-of-date value being written).

One way to address these issues is to wrap the nonatomic operations in a lock statement. Locking, in fact, simulates atomicity if consistently applied. The Interlocked class, however, provides an easier and faster solution for such simple operations:

class Program
{
  static long sum;

  static void Main(  )
  {                                                               // sum
    // Simple increment/decrement operations:
    Interlocked.Increment (ref sum);                              // 1
    Interlocked.Decrement (ref sum);                              // 0
    // Add/subtract a value:
    Interlocked.Add (ref sum, 3);                                 // 3

    // Read a 64-bit field:
    Console.WriteLine (Interlocked.Read (ref sum));               // 3

    // Write a 64-bit field while reading previous value:
    // (This prints "3" while updating sum to 10)
    Console.WriteLine (Interlocked.Exchange (ref sum, 10));       // 10

    // Update a field only if it matches a certain value (10):
    Interlocked.CompareExchange (ref sum, 123, 10);               // 123
  }
}

Interlocked works by making its need for atomicity known to the operating system and virtual machine. Using Interlocked is generally more efficient than obtaining a lock, because it can never block and suffer the overhead of its thread being temporarily descheduled.

Interlocked is also valid across multiple processes, in contrast to the lock statement, which is effective only across threads in the current process. An example of where this might be useful is in reading and writing process-shared memory.

Memory Barriers and Volatility

Consider this class:

class Unsafe
{
  static bool endIsNigh, repented;

  static void Main(  )
  {
    new Thread (Wait).Start(  );        // Start up the spinning waiter
    Thread.Sleep (1000);              // Give it a second to warm up!

    repented = true;
    endIsNigh = true;
  }

  static void Wait(  )
  {
    while (!endIsNigh);               // Spin until endIsNigh
    Console.Write (repented);
  }
}

Is it possible for the Wait method to write "False"?

The answer is yes, on a multicore or multiprocessor machine. The repented and endIsNigh fields can be cached in CPU registers to improve performance, meaning a delay before their updated values are written back to memory. And when the CPU registers are written back to memory, it's not necessarily in the order they were originally updated.

The static methods Thread.VolatileRead and Thread.VolatileWrite circumvent this caching. VolatileRead means "read the latest value"; VolatileWrite means "write immediately to memory." You can achieve the same thing more elegantly by declaring the field with the volatile modifier:

class ThreadSafe
{
  // Always use volatile read/write semantics:
  volatile static bool endIsNigh, repented;
  ...
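For comparison, here's roughly what the explicit-method approach might look like (a sketch only: VolatileRead and VolatileWrite have no bool overloads, so we'd switch the flags to int fields):

class ExplicitVolatile
{
  static int endIsNigh, repented;     // 0 = false; 1 = true

  static void Signal(  )
  {
    Thread.VolatileWrite (ref repented, 1);
    Thread.VolatileWrite (ref endIsNigh, 1);
  }

  static void Wait(  )
  {
    while (Thread.VolatileRead (ref endIsNigh) == 0);     // Spin until signaled
    Console.Write (Thread.VolatileRead (ref repented) == 1);
  }
}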

Tip

If the volatile keyword is used in preference to the VolatileRead and VolatileWrite methods, you can think of it in the simplest terms: "never thread-cache this field!"

If access to repented and endIsNigh is wrapped in a lock statement, volatile read and write semantics are applied automatically, and the volatile keyword is unnecessary. This is because an (intended) side effect of locking is to create a memory barrier: a guarantee that the volatility of fields used within the lock statement will not extend outside the lock statement's scope. In other words, the fields will be fresh on entering the lock (volatile read) and be written to memory before exiting the lock (volatile write). Locking makes volatile redundant.

A lock statement has further advantages. In this case, it would allow us to access the fields repented and endIsNigh as a single atomic unit so that we could safely run something like this:

object locker = new object(  );
...
lock (locker) { if (endIsNigh) repented = true; }

A lock is also preferable when a field is used many times in a loop (assuming the lock is held for the duration of the loop). Although a volatile read or write beats a lock in performance, a thousand volatile read/writes are unlikely to beat one lock!

Volatility applies to reference types, primitive integral types, and unsafe pointer types. Other value types such as DateTime cannot be cached in CPU registers and so need not (and cannot) be declared with the volatile keyword. Volatile read and write semantics are also unnecessary when fields are accessed via the Interlocked class.

Signaling with Event Wait Handles

Event wait handles are used for signaling. Signaling is when one thread waits until it receives notification from another. Event wait handles are the simplest of the signaling constructs, and they are unrelated to C# events. They come in two flavors, AutoResetEvent and ManualResetEvent. Both are based on the common EventWaitHandle class, from which they derive all their functionality.

An AutoResetEvent is much like a ticket turnstile: inserting a ticket lets exactly one person through. The "auto" in the class's name refers to the fact that an open turnstile automatically closes or "resets" after someone steps through. A thread waits, or blocks, at the turnstile by calling WaitOne (wait at this "one" turnstile until it opens), and a ticket is inserted by calling the Set method. If a number of threads call WaitOne, a queue builds up behind the turnstile. A ticket can come from any thread; in other words, any (unblocked) thread with access to the AutoResetEvent object can call Set on it to release one blocked thread.

In the following example, a thread is started whose job is simply to wait until signaled by another thread (see Figure 19-2, "Signaling with an EventWaitHandle"):

class BasicWaitHandle
{
  static EventWaitHandle wh = new AutoResetEvent (false);

  static void Main(  )
  {
    new Thread (Waiter).Start(  );
    Thread.Sleep (1000);                  // Pause for a second...
    wh.Set(  );                             // Wake up the Waiter.
  }

  static void Waiter(  )
  {
    Console.WriteLine ("Waiting...");
    wh.WaitOne(  );                        // Wait for notification
    Console.WriteLine ("Notified");
  }
}

// Output:
Waiting...(pause) Notified.

Figure 19-2. Signaling with an EventWaitHandle

Signaling with an EventWaitHandle

If Set is called when no thread is waiting, the handle stays open for as long as it takes until some thread calls WaitOne. This behavior helps avoid a race between a thread heading for the turnstile, and a thread inserting a ticket ("Oops, inserted the ticket a microsecond too soon, bad luck, now you'll have to wait indefinitely!"). However, calling Set repeatedly on a turnstile at which no one is waiting doesn't allow a whole party through when they arrive: only the next single person is let through and the extra tickets are "wasted."

A ManualResetEvent functions like an ordinary gate. Calling Set opens the gate, allowing any number of threads calling WaitOne to be let through. Calling Reset closes the gate. Threads that call WaitOne on a closed gate will block; when the gate is next opened, they will be released all at once.
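For instance, in the following sketch, three threads queue up at a closed gate and are all released by a single call to Set:

class Gate
{
  static ManualResetEvent gate = new ManualResetEvent (false);   // Gate starts closed

  static void Main(  )
  {
    for (int i = 0; i < 3; i++) new Thread (Enter).Start (i);
    Thread.Sleep (1000);
    gate.Set(  );                      // Open the gate: all three threads proceed
  }

  static void Enter (object id)
  {
    Console.WriteLine ("Thread " + id + " waiting...");
    gate.WaitOne(  );
    Console.WriteLine ("Thread " + id + " through!");
  }
}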

The Reset method also works on an AutoResetEvent. Its effect is then to close the turnstile (should it be open) without waiting or blocking.

WaitOne accepts an optional timeout parameter, returning false if the wait ended because of a timeout rather than obtaining the signal. WaitOne can also be instructed to exit the current synchronization context for the duration of the wait (if an automatic locking regime is in use) in order to prevent excessive blocking.

Tip

Calling WaitOne with a timeout of zero tests whether a wait handle is "open," without blocking the caller.
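For example (a sketch, assuming wh is an EventWaitHandle field):

bool open = wh.WaitOne (0, false);    // true if signaled; returns without blocking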

Creating and Disposing Wait Handles

Event wait handles can be created in one of two ways. The first is via their constructors:

EventWaitHandle auto = new AutoResetEvent (false);
EventWaitHandle manual = new ManualResetEvent (false);

If the boolean argument is true, the handle's Set method is called automatically, immediately after construction. The other method of instantiation is via the base class, EventWaitHandle:

var auto = new EventWaitHandle (false, EventResetMode.AutoReset);
var manual = new EventWaitHandle (false, EventResetMode.ManualReset);

Once you've finished with a wait handle, you can call its Close method to release the operating system resource. Alternatively, you can simply drop all references to the wait handle and allow the garbage collector to do the job for you sometime later (wait handles implement the disposal pattern whereby the finalizer calls Close). This practice is (arguably) acceptable with wait handles because they have a light OS burden (asynchronous delegates rely on exactly this mechanism to release their IAsyncResult's wait handle).

Wait handles are released automatically when an application domain unloads.

Two-Way Signaling

Let's say we want the main thread to signal a worker thread three times in a row. If the main thread simply calls Set on a wait handle several times in rapid succession, the second or third signal may get lost, since the worker may take time to process each signal.

The solution is for the main thread to wait until the worker's ready before signaling it. This can be done with another AutoResetEvent, as follows:

class TwoWaySignaling
{
  static EventWaitHandle ready = new AutoResetEvent (false);
  static EventWaitHandle go = new AutoResetEvent (false);
  static volatile string message;         // We must either use volatile
                                          // or lock around this field
  static void Main(  )
  {
    new Thread (Work).Start(  );

    ready.WaitOne(  );            // First wait until worker is ready
    message = "ooo";
    go.Set(  );                   // Tell worker to go!

    ready.WaitOne(  );
    message = "ahhh";           // Give the worker another message
    go.Set(  );

    ready.WaitOne(  );
    message = null;             // Signal the worker to exit
    go.Set(  );
  }

  static void Work(  )
  {
    while (true)
    {
      ready.Set(  );                          // Indicate that we're ready
      go.WaitOne(  );                         // Wait to be kicked off...
      if (message == null) return;          // Gracefully exit
      Console.WriteLine (message);
    }
  }
}

// Output:
ooo
ahhh

Figure 19-3, "Two-way signaling" shows this process visually.

Figure 19-3. Two-way signaling

Two-way signaling

Here, we're using a null message to indicate that the worker should end. With threads that run indefinitely, it's important to have an exit strategy!

Creating a Cross-Process EventWaitHandle

EventWaitHandle's constructor allows a "named" EventWaitHandle to be created, capable of operating across multiple processes. The name is simply a string, and it can be any value that doesn't unintentionally conflict with someone else's name! If the name is already in use on the computer, you get a reference to the same underlying EventWaitHandle; otherwise, the operating system creates a new one. Here's an example:

EventWaitHandle wh = new EventWaitHandle (false, EventResetMode.AutoReset,
                                          "MyCompany.MyApp.SomeName");

If two applications each ran this code, they would be able to signal each other: the wait handle would work across all threads in both processes.

Pooling Wait Handles

If your application has lots of threads that spend most of their time blocked on a wait handle, you can reduce the resource burden via the thread pool. The thread pool economizes by coalescing many wait handles onto a few threads.

To use the thread pool, register your wait handle along with a delegate to be executed when the wait handle is signaled. Do this by calling ThreadPool.RegisterWaitForSingleObject, as in this example:

class Test
{
  static ManualResetEvent starter = new ManualResetEvent (false);

  public static void Main(  )
  {
    ThreadPool.RegisterWaitForSingleObject (starter, Go, "hello", -1, true);
    Thread.Sleep (5000);
    Console.WriteLine ("Signaling worker...");
    starter.Set(  );
    Console.ReadLine(  );
  }

  public static void Go (object data, bool timedOut)
  {
    Console.WriteLine ("Started " + data);
    // Perform task...
  }
}

// Output:
(5 second delay)
Signaling worker...
Started hello

In addition to the wait handle and delegate, RegisterWaitForSingleObject accepts a "black box" object that it passes to your delegate method (rather like ParameterizedThreadStart), as well as a timeout in milliseconds (-1 meaning no timeout) and a boolean flag indicating whether the request is one-off rather than recurring.

RegisterWaitForSingleObject is particularly valuable in an application server that must handle many concurrent requests. Suppose you need to block on a ManualResetEvent and simply call WaitOne:

void AppServerMethod(  )
{
  wh.WaitOne(  );
  // ... continue execution
}

If 100 clients called this method, 100 server threads would be tied up for the duration of the blockage. Replacing wh.WaitOne with RegisterWaitForSingleObject allows the method to return immediately, wasting no threads:

void AppServerMethod(  )
{
  ThreadPool.RegisterWaitForSingleObject (wh, Resume, null, -1, true);
}

static void Resume (object data, bool timedOut)
{
  // ... continue execution
}

The data object passed to Resume allows continuance of any transient data.

WaitAny, WaitAll, and SignalAndWait

In addition to the Set, WaitOne, and Reset methods, there are static methods on the WaitHandle class to crack more complex synchronization nuts. The WaitAny, WaitAll, and SignalAndWait methods wait across multiple handles. The wait handles can be of differing types, and they include Mutex and Semaphore objects, since these also derive from the abstract WaitHandle class.

SignalAndWait is perhaps the most useful: it calls WaitOne on one WaitHandle, while calling Set on another WaitHandle, in an atomic operation. You can use this method on a pair of EventWaitHandles to set up two threads to rendezvous or "meet" at the same point in time, in a textbook fashion. Either AutoResetEvent or ManualResetEvent will do the trick. The first thread does the following:

WaitHandle.SignalAndWait (wh1, wh2);

whereas the second thread does the opposite:

WaitHandle.SignalAndWait (wh2, wh1);
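Putting the two calls together, a minimal rendezvous sketch (with wh1 and wh2 as AutoResetEvent fields) might look like this:

class Rendezvous
{
  static EventWaitHandle wh1 = new AutoResetEvent (false);
  static EventWaitHandle wh2 = new AutoResetEvent (false);

  static void Main(  )
  {
    new Thread (PartyB).Start(  );
    WaitHandle.SignalAndWait (wh1, wh2);    // Signal wh1, then wait on wh2
    Console.WriteLine ("Main: we met!");
  }

  static void PartyB(  )
  {
    WaitHandle.SignalAndWait (wh2, wh1);    // Signal wh2, then wait on wh1
    Console.WriteLine ("Worker: we met!");
  }
}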

WaitHandle.WaitAny waits for any one of an array of wait handles; WaitHandle.WaitAll waits on all of the given handles. WaitAll is of dubious value because of a weird connection to apartment threading-a throwback to the legacy COM architecture. WaitAll requires that the caller be in a multithreaded apartment, the model least suitable for interoperability. The main thread of a Windows application, for example, is unable to interact with the clipboard in this mode. Fortunately, the .NET Framework provides another signaling mechanism that one can use when wait handles are awkward or unsuitable: Wait and Pulse.

Signaling with Wait and Pulse

The Monitor class provides another signaling construct, via two static methods: Wait and Pulse. The principle is that you write the signaling logic yourself using custom flags and fields (enclosed in lock statements), and then introduce Wait and Pulse commands to mitigate CPU spinning. The advantage of this low-level approach is that with just Wait, Pulse, and the lock statement, you can achieve the functionality of AutoResetEvent, ManualResetEvent, and Semaphore, as well as WaitHandle's static methods WaitAll and WaitAny. Furthermore, Wait and Pulse can be preferable in situations where you would otherwise need a profusion of wait handles.

Wait and Pulse signaling, however, has a number of disadvantages over event wait handles:

  • Wait/Pulse cannot span application domains or processes on a computer.

  • Wait/Pulse cannot be used in the asynchronous method pattern (see Chapter 20, Asynchronous Methods) because the thread pool offers Monitor.Wait no equivalent of RegisterWaitForSingleObject, so a blocked Wait cannot avoid monopolizing a thread.

  • You must remember to protect all variables related to the signaling logic with locks.

  • Wait/Pulse programs may confuse developers relying on the MSDN for documentation.

The documentation problem arises because it's not obvious how Wait and Pulse are supposed to be used, even when you've read up on how they work. Wait and Pulse also have a peculiar aversion to dabblers: they will seek out any holes in your understanding and then delight in tormenting you! Fortunately, there is a simple pattern of use that tames Wait and Pulse.

In terms of performance, Wait and Pulse are faster than an event wait handle if you expect the waiter not to block. Otherwise, they are similar, each with an overhead in the few-microseconds region.

How to Use Wait and Pulse

Here's how you use Wait and Pulse:

  1. Define a single field for use as the synchronization object, such as:

    object locker = new object(  );
    
  2. Define field(s) for use in your custom blocking condition(s). For example:

    bool go; or int semaphoreCount;
    
  3. Whenever you want to block, include the following code:

    lock (locker)
      while (<blocking-condition> )
        Monitor.Wait (locker);
    
  4. Whenever you change (or potentially change) a blocking condition, include this code:

    lock (locker)
    {
      < alter the field(s) or data that might
        impact the blocking condition(s) >
      Monitor.PulseAll (locker);
    }
    

    (If you change a blocking condition and want to block, you can incorporate steps 3 and 4 in a single lock statement.)

This pattern allows any thread to wait at any time for any condition. Here's a simple example, where a worker thread waits until the go field is set to true:

class SimpleWaitPulse
{
  static object locker = new object(  );
  static bool go;

  static void Main(  )
  {                                // The new thread will block
    new Thread (Work).Start(  );     // because go==false.

    Console.ReadLine(  );            // Wait for user to hit Enter

    lock (locker)                  // Let's now wake up the thread by
    {                              // setting go=true and pulsing.
      go = true;
      Monitor.PulseAll (locker);
    }
  }

  static void Work(  )
  {
    lock (locker)
      while (!go)
        Monitor.Wait (locker);

    Console.WriteLine ("Woken!!!");
  }
}

// Output
Woken!!!   (after pressing Enter)

For thread safety, we ensure that all shared fields are accessed within a lock. Hence, we add lock statements around both reading and updating the go flag. This is essential.

The Work method is where we block, waiting for the go flag to become true. The Monitor.Wait method does the following, in order:

  1. Releases the lock on locker

  2. Blocks until locker is "pulsed"

  3. Reacquires the lock on locker

Execution then continues at the next statement. Monitor.Wait is designed for use within a lock statement; it throws an exception if called otherwise. The same goes for Monitor.Pulse.

In the Main method, we signal the worker by setting the go flag (within a lock) and calling PulseAll. As soon as we release the lock, the worker resumes execution, reiterating its while loop.

The Pulse and PulseAll methods release threads blocked on a Wait statement. Pulse releases a maximum of one thread; PulseAll releases them all. In our example, just one thread is blocked, so their effects are identical. With our suggested pattern, call PulseAll if in doubt.

Tip

In order for Wait to communicate with Pulse or PulseAll, the synchronizing object (locker, in our case) must be the same.

In our pattern, pulsing indicates that something might have changed, and that waiting threads should recheck their blocking conditions. In the Work method, this check is accomplished via the while loop. The waiter then decides whether to continue, not the notifier. If pulsing by itself is taken as instruction to continue, the Wait construct is stripped of any real value; you end up with an inferior version of an AutoResetEvent.

If we abandon our pattern, removing the while loop, the go flag, and the ReadLine, we get a bare-bones Wait/Pulse example:

static void Main(  )
{
  new Thread (Work).Start(  );
  lock (locker) Monitor.Pulse (locker);
}

static void Work(  )
{
  lock (locker) Monitor.Wait (locker);
  Console.WriteLine ("Woken!!!");
}

It's not possible to display the output, because it's nondeterministic! A race ensues between the main thread and the worker. If Wait executes first, the signal works. If Pulse executes first, the pulse is lost and the worker remains forever stuck. This differs from the behavior of an AutoResetEvent, where its Set method has a memory or "latching" effect, so it is still effective if called before WaitOne.

The reason Pulse has no latching effect is that you're expected to write the latch yourself, using a "go" flag as we did before. This is what makes Wait and Pulse versatile: with a boolean flag, we can make it function as an AutoResetEvent; with an integer field, we can imitate a Semaphore. With more complex data structures, we can go further and write such constructs as a producer/consumer queue.

Producer/Consumer Queue

A producer/consumer queue is a common requirement in threading. Here's how it works:

  • A queue is set up to describe tasks.

  • When a task needs executing, it's enqueued, allowing the caller to get on with other things.

  • One or more worker threads plug away in the background, picking off and executing queued tasks.

The advantage of this model is that you have precise control over how many worker threads execute at once. This can allow you to limit not only consumption of CPU time, but other resources as well. If the tasks perform intensive disk I/O, for instance, you might have just one worker thread to avoid starving the operating system and other applications. Another type of application may have 20. You can also dynamically add and remove workers throughout the queue's life.

Tip

A producer/consumer queue is rather like an independent thread pool.

Here's a producer/consumer queue that uses a string (for simplicity) to represent a task:

using System;
using System.Threading;
using System.Collections.Generic;

public class TaskQueue : IDisposable
{
  object locker = new object(  );
  Thread[] workers;
  Queue<string> taskQ = new Queue<string>(  );

  public TaskQueue (int workerCount)
  {
    workers = new Thread [workerCount];

    // Create and start a separate thread for each worker
    for (int i = 0; i < workerCount; i++)
      (workers [i] = new Thread (Consume)).Start(  );
  }

  public void Dispose(  )
  {
    // Enqueue one null task per worker to make each exit.
    foreach (Thread worker in workers) EnqueueTask (null);
  }

  public void EnqueueTask (string task)
  {
    lock (locker)
    {
      taskQ.Enqueue (task);            // We must pulse because we're
      Monitor.Pulse (locker);          // changing a blocking condition.
    }
  }

  void Consume(  )
  {
    while (true)                        // Keep consuming until
    {                                   // told otherwise
      string task;
      lock (locker)
      {
        while (taskQ.Count == 0) Monitor.Wait (locker);
        task = taskQ.Dequeue(  );
      }
      if (task == null) return;         // This signals our exit
      Console.Write (task);             // Perform task.
      Thread.Sleep (1000);              // Simulate time-consuming task
    }
  }
}

Again we have an exit strategy: enqueuing a null task signals a consumer to finish after completing any outstanding tasks (if we want it to quit sooner, we could use an independent "exit" flag). Because we're supporting multiple consumers, we must enqueue one null task per consumer to completely shut down the queue.

Here's a Main method that starts a task queue, specifying two concurrent consumer threads, and then enqueues 10 tasks to be shared among the two consumers:

static void Main(  )
{
  using (TaskQueue q = new TaskQueue (2))
  {
    for (int i = 0; i < 10; i++)
      q.EnqueueTask (" Task" + i);

    Console.WriteLine ("Enqueued 10 tasks");
    Console.WriteLine ("Waiting for tasks to complete...");
  }

  // Exiting the using statement runs TaskQueue's Dispose method, which
  // shuts down the consumers, after all outstanding tasks are completed.

  Console.WriteLine ("\r\nAll tasks done!");
}

// Output:
Enqueued 10 tasks
Waiting for tasks to complete...
 Task1 Task0 (pause...) Task2 Task3 (pause...) Task4 Task5 (pause...)
 Task6 Task7 (pause...) Task8 Task9 (pause...)
All tasks done!

Let's revisit TaskQueue and examine the Consume method, where a worker picks off and executes a task from the queue. We want the worker to block while there's nothing to do; in other words, when there are no items on the queue. Hence, our blocking condition is taskQ.Count==0:

      string task;
      lock (locker)
      {
        while (taskQ.Count == 0) Monitor.Wait (locker);
        task = taskQ.Dequeue(  );
      }
      if (task == null) return;         // This signals our exit
      Console.Write (task);
      Thread.Sleep (1000);              // Simulate time-consuming task

The while loop exits when taskQ.Count is nonzero, meaning that (at least) one task is outstanding. We must dequeue the task before releasing the lock-otherwise, the task may not be there for us to dequeue; the presence of other threads means things can change while you blink. In particular, another consumer just finishing a previous job could sneak in and dequeue our task if we weren't meticulous with locking.

After the task is dequeued, we release the lock immediately. If we held on to it while performing the task, we would unnecessarily block other consumers and producers. We don't pulse after dequeuing, as no other consumer can ever unblock by there being fewer items on the queue.

Warning

Aim to lock briefly, when using Wait and Pulse, to avoid unnecessarily blocking other threads. Locking across many lines of code is fine-providing they all execute quickly. Remember that you're helped by Monitor.Wait's releasing the underlying lock while awaiting a pulse!

For the sake of efficiency, we call Pulse instead of PulseAll when enqueuing a task. This is because (at most) one consumer need be woken per task. If you had just one ice cream, you wouldn't wake a class of 30 sleeping children to queue for it; similarly, with 30 consumers, there's no benefit in waking them all-only to have 29 spin a useless iteration on their while loop before going back to sleep. We wouldn't break anything functionally, however, by replacing Pulse with PulseAll.

Wait Timeouts

You can specify a timeout when calling Wait, either in milliseconds or as a TimeSpan. The Wait method then returns false if it gave up because of a timeout. The timeout applies only to the waiting phase. Hence, a Wait with a timeout does the following:

  1. Releases the underlying lock

  2. Blocks until pulsed, or the timeout elapses

  3. Reacquires the underlying lock

Specifying a timeout is like asking the CLR to give you a "virtual pulse" after the timeout interval. A timed-out Wait will still perform step 3 and reacquire the lock-just as if pulsed.

Warning

Should Wait block in step 3 (while reacquiring the lock), any timeout is ignored. This is rarely an issue, though, because other threads will lock only briefly in a well-designed Wait/Pulse application. So, reacquiring the lock should be a near-instant operation.

Wait timeouts have a useful application. Sometimes it may be unreasonable or impossible to Pulse whenever an unblocking condition arises. An example might be if a blocking condition involves calling a method that derives information from periodically querying a database. If latency is not an issue, the solution is simple-one can specify a timeout when calling Wait, as follows:

lock (locker)
  while (<blocking-condition> )
    Monitor.Wait (locker, <timeout> );

This forces the blocking condition to be rechecked at the interval specified by the timeout, as well as when pulsed. The simpler the blocking condition, the smaller the timeout can be without creating inefficiency. In this case, we don't care whether the Wait was pulsed or timed out, so we ignore its return value.

The same system works equally well if the pulse is absent due to a bug in the program. It can be worth adding a timeout to all Wait commands in programs where synchronization is particularly complex, as an ultimate backup for obscure pulsing errors. It also provides a degree of bug immunity if the program is modified later by someone not on the Pulse!

Two-Way Signaling

Let's say we want to signal a thread five times in a row:

class Race
{
  static object locker = new object(  );
  static bool go;

  static void Main(  )
  {
    new Thread (SaySomething).Start(  );

    for (int i = 0; i < 5; i++)
      lock (locker) { go = true; Monitor.PulseAll (locker); }
  }

  static void SaySomething(  )
  {
    for (int i = 0; i < 5; i++)
      lock (locker)
      {
        while (!go) Monitor.Wait (locker);
        go = false;
        Console.WriteLine ("Wassup?");
      }
  }
}

// Expected Output:
Wassup?
Wassup?
Wassup?
Wassup?
Wassup?

//Actual Output:
Wassup? (hangs)

This program is flawed: the for loop in the main thread can freewheel right through its five iterations anytime the worker doesn't hold the lock, and possibly before the worker even starts! The producer/consumer example didn't suffer from this problem because if the main thread got ahead of the worker, each request would queue up. But in this case, we need the main thread to block at each iteration if the worker's still busy with a previous task.

We can solve this by adding a ready flag to the class, controlled by the worker. The main thread then waits until the worker's ready before setting the go flag.

Tip

This is analogous to the two-way signaling example in the section "Signaling with Event Wait Handles," earlier in this chapter.

Here it is:

class Solved
{
  static object locker = new object(  );
  static bool ready, go;

  static void Main(  )
  {
    new Thread (SaySomething).Start(  );

    for (int i = 0; i < 5; i++)
      lock (locker)
      {
        while (!ready) Monitor.Wait (locker);
        ready = false;
        go = true;
        Monitor.PulseAll (locker);
      }
  }
  static void SaySomething(  )
  {
    for (int i = 0; i < 5; i++)
      lock (locker)
      {
        ready = true;
        Monitor.PulseAll (locker);            // Remember that calling
        while (!go) Monitor.Wait (locker);    // Monitor.Wait releases
        go = false;                           // and reacquires the lock.
        Console.WriteLine ("Wassup?");
      }
  }
}

// Output:
Wassup? (repeated five times)

In the Main method, we clear the ready flag, set the go flag, and pulse, all in the same lock statement. The benefit of doing this is that it offers robustness if we later introduce a third thread into the equation. Imagine another thread trying to signal the worker at the same time. Our logic is watertight in this scenario; in effect, we're clearing ready and setting go, atomically.

Simulating Wait Handles

You might have noticed a pattern in the preceding example: both waiting loops have the following structure:

lock (locker)
{
  while (!flag) Monitor.Wait (locker);
  flag = false;
 ...
}

where flag is set to true in another thread. This is, in effect, mimicking an AutoResetEvent. If we omitted flag=false, we'd have a ManualResetEvent; if we replaced the flag with an integer field, we'd have a Semaphore.
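As a sketch, here's how an integer field (the semaphoreCount mentioned in the earlier pattern, initialized to the number of available slots) could imitate a semaphore, with locker as the synchronization object:

// "WaitOne" on the simulated semaphore:
lock (locker)
{
  while (semaphoreCount == 0) Monitor.Wait (locker);
  semaphoreCount--;
}

// "Release" on the simulated semaphore:
lock (locker)
{
  semaphoreCount++;
  Monitor.PulseAll (locker);
}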

Simulating the static methods that work across a set of wait handles is, in most cases, easy. The equivalent of calling WaitAll across event wait handles is nothing more than a blocking condition that incorporates all the flags used in place of the wait handles:

lock (locker)
  while (!flag1 || !flag2 || !flag3...)
    Monitor.Wait (locker);

This can be particularly useful given that WaitAll is often unusable due to COM legacy issues. Simulating WaitAny is simply a matter of replacing the || operator with the && operator.

SignalAndWait is trickier. Recall that this method signals one handle while waiting on another in an atomic operation. We have a situation analogous to a distributed database transaction: we need a two-phase commit! Assuming we wanted to signal flagA while waiting on flagB, we'd have to divide each flag into two, resulting in code that might look like this:

lock (locker)
{
  flagAphase1 = true;
  Monitor.Pulse (locker);
  while (!flagBphase1) Monitor.Wait (locker);

  flagAphase2 = true;
  Monitor.Pulse (locker);
  while (!flagBphase2) Monitor.Wait (locker);
}

with additional "rollback" logic to retract flagAphase1 if the first Wait statement threw an exception as a result of being interrupted or aborted. This is a situation where wait handles are easier. True atomic signaling and waiting, however, is actually an unusual requirement.

Interrupt and Abort

All blocking methods-Sleep, Join, EndInvoke, WaitOne, and Wait-block forever if the unblocking condition is never met and no timeout is specified. Occasionally, it can be useful to release a blocked thread prematurely; for instance, when ending an application. Two methods accomplish this:

  • Thread.Interrupt

  • Thread.Abort

The Abort method is also capable of ending a nonblocked thread-stuck, perhaps, in an infinite loop.

Interrupt

Calling Interrupt on a blocked thread forcibly releases it, throwing a ThreadInterruptedException, as follows:

static void Main(  )
{
  Thread t = new Thread (delegate(  )
  {
    try
    {
      Thread.Sleep (Timeout.Infinite);
    }
    catch (ThreadInterruptedException)
    {
      Console.Write ("Forcibly ");
    }
    Console.WriteLine ("Woken!");
  });
  t.Start(  );
  t.Interrupt(  );
}

// Output:
Forcibly Woken!

Interrupting a thread does not cause the thread to end, unless the ThreadInterruptedException is unhandled.

If Interrupt is called on a thread that's not blocked, the thread continues executing until it next blocks, at which point a ThreadInterruptedException is thrown. This avoids the need for the following test:

if ((worker.ThreadState & ThreadState.WaitSleepJoin) > 0)
  worker.Interrupt(  );

which is not thread-safe because of the possibility of preemption between the if statement and worker.Interrupt.

Interrupting a thread arbitrarily is dangerous, however, because any framework or third-party methods in the calling stack could unexpectedly receive the interrupt rather than your intended code. All it would take is for the thread to block briefly on a simple lock or synchronization resource, and any pending interruption would kick in. If the method isn't designed to be interrupted (with appropriate cleanup code in finally blocks), objects could be left in an unusable state or resources incompletely released.

Interrupting a thread is safe when you are sure exactly where the thread is; for instance, through a signaling construct that you monopolize.

Abort

A blocked thread can also be forcibly released via its Abort method. This has an effect similar to calling Interrupt, except that a ThreadAbortException is thrown instead of a ThreadInterruptedException. Furthermore, the exception will be rethrown at the end of the catch block (in an attempt to terminate the thread for good) unless Thread.ResetAbort is called within the catch block. In the interim, the thread has a ThreadState of AbortRequested.
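Here's a minimal sketch of that behavior (our own illustration, not a listing from earlier in the chapter):

static void Main(  )
{
  Thread t = new Thread (Work);
  t.Start(  );
  Thread.Sleep (1000);
  t.Abort(  );                      // Worker is blocked in Sleep, so this releases it
  t.Join(  );
}

static void Work(  )
{
  try { Thread.Sleep (Timeout.Infinite); }
  catch (ThreadAbortException)
  {
    Console.WriteLine ("Abort requested");
    // Without a call to Thread.ResetAbort(  ), the ThreadAbortException
    // is rethrown here and the thread ends.
  }
  Console.WriteLine ("This line runs only if ResetAbort was called");
}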

Tip

An unhandled ThreadAbortException does not cause application shutdown, unlike all other types of Exception.

The big difference between Interrupt and Abort is what happens when it's called on a thread that is not blocked. Whereas Interrupt waits until the thread next blocks before doing anything, Abort throws an exception on the thread right where it's executing (unmanaged code excepted). This is a problem because .NET Framework code might be aborted; code that is not abort-safe. This rules out using Abort in almost any nontrivial context.

There are two cases, though, where you can safely use Abort. One is if you are willing to tear down a thread's application domain after it's aborted. A good example of when you might do this is in writing a unit-testing framework. (We discuss application domains fully in Chapter 21, Application Domains.) Another case where you can call Abort safely is on your own thread. We describe this in the following section.

Safe Cancellation

An alternative to aborting another thread is to implement a pattern whereby the worker periodically checks a cancel flag, exiting if the flag is true. To abort, the instigator simply sets the flag, and then waits for the worker to comply:

class ProLife
{
  public static void Main(  )
  {
    RulyWorker w = new RulyWorker(  );
    Thread t = new Thread (w.Work);
    t.Start(  );
    Thread.Sleep (1000);

    Console.WriteLine ("aborting");
    w.Abort(  );                       // Safely abort the worker.
    Console.WriteLine ("aborted");
  }

  public class RulyWorker
  {
    volatile bool abort;
    public void Abort(  ) { abort = true; }

    public void Work(  )
    {
      while (true)
      {
        CheckAbort(  );
        // Do stuff...
        try      { OtherMethod(  ); }
        finally  { /* any required cleanup */ }
      }
    }

    void OtherMethod(  )
    {
      // Do stuff...
      CheckAbort(  );
    }

    void CheckAbort() { if (abort) Thread.CurrentThread.Abort(  ); }
  }
}

The disadvantage is that the worker method must be written explicitly to support cancellation. Nonetheless, this is one of the few safe cancellation patterns.

In our example, the worker calls Abort on its own thread upon noticing that the abort field is true. This is safe because we're aborting from a known place, and it results in a graceful exit up the execution stack (without circumventing code in finally blocks). Throwing a custom exception works equally well, although you must then catch the exception at the top level in your thread entry method to avoid application shutdown (a good idea, anyway, with any type of exception).

The BackgroundWorker helper class supports a similar flag-based cancellation pattern.

Local Storage

Much of this chapter has focused on synchronization constructs and the issues arising from having threads concurrently access the same data. Sometimes, however, you want to keep data isolated, ensuring that each thread has a separate copy. Local variables achieve exactly this, but they are useful only with transient data.

The Thread class provides GetData and SetData methods for storing nontransient isolated data in "slots" whose values persist between method calls. You might be hard-pressed to think of a requirement: data you'd want to keep isolated to a thread tends to be transient by nature. Its main application is for storing "out-of-band" data-that which supports the execution path's infrastructure, such as messaging, transaction, and security tokens. Passing such data around in method parameters is extremely clumsy and alienates all but your own methods; storing such information in static fields means sharing it between all threads.

Thread.GetData reads from a thread's isolated data store; Thread.SetData writes to it. Both methods require a LocalDataStoreSlot object to identify the slot. This is just a wrapper for a string that names the slot; the same slot can be used across all threads and they'll still get separate values. Here's an example:

class Test
{
  // The same LocalDataStoreSlot object can be used across all threads.
  LocalDataStoreSlot secSlot = Thread.GetNamedDataSlot ("securityLevel");

  // This property has a separate value on each thread.
  int SecurityLevel
  {
    get
    {
      object data = Thread.GetData (secSlot);
      return data == null ? 0 : (int) data;    // null == uninitialized
    }
    set { Thread.SetData (secSlot, value); }
  }
  ...

Thread.FreeNamedDataSlot will release a given data slot across all threads, but only once all LocalDataStoreSlot objects of the same name have dropped out of scope and been garbage-collected. This ensures that threads don't get data slots pulled out from under their feet, as long as they keep a reference to the appropriate LocalDataStoreSlot object while the slot is needed.

BackgroundWorker

BackgroundWorker is a helper class in the System.ComponentModel namespace for managing a worker thread. It provides the following features:

  • A cancel flag for signaling a worker to end without using Abort

  • A standard protocol for reporting progress, completion, and cancellation

  • An implementation of IComponent, allowing it to be sited in Visual Studio's designer

  • Exception handling on the worker thread

  • The ability to update Windows Forms or WPF controls in response to worker progress or completion

The last two features are particularly useful. You don't have to include a try/catch block in your worker method, and you can safely update Windows Forms controls or WPF elements without needing to call Control.Invoke or Dispatcher.Invoke.

BackgroundWorker uses the thread pool, which means you should never call Abort on a BackgroundWorker thread.

Here are the minimum steps in using BackgroundWorker:

  1. Instantiate BackgroundWorker and handle the DoWork event.

  2. Call RunWorkerAsync, optionally with an object argument.

This then sets it in motion. Any argument passed to RunWorkerAsync will be forwarded to DoWork's event handler, via the event argument's Argument property. Here's an example:

class Program
{
  static BackgroundWorker bw = new BackgroundWorker(  );

  static void Main(  )
  {
    bw.DoWork += bw_DoWork;
    bw.RunWorkerAsync ("Message to worker");
    Console.ReadLine(  );
  }

  static void bw_DoWork (object sender, DoWorkEventArgs e)
  {
    // This is called on the worker thread
    Console.WriteLine (e.Argument);        // writes "Message to worker"
    // Perform time-consuming task...
  }
}

BackgroundWorker also provides a RunWorkerCompleted event that fires after the DoWork event handler has done its job. Handling RunWorkerCompleted is not mandatory, but one usually does so in order to query any exception that was thrown in DoWork. Furthermore, code within a RunWorkerCompleted event handler is able to update user interface controls without explicit marshaling; code within the DoWork event handler cannot.

To add support for progress reporting:

  1. Set the WorkerReportsProgress property to true.

  2. Periodically call ReportProgress from within the DoWork event handler with a "percentage complete" value, and optionally, a user-state object.

  3. Handle the ProgressChanged event, querying its event argument's ProgressPercentage property.

  4. Code in the ProgressChanged event handler is free to interact with UI controls just as with RunWorkerCompleted. This is typically where you will update a progress bar.

To add support for cancellation:

  1. Set the WorkerSupportsCancellation property to true.

  2. Periodically check the CancellationPending property from within the DoWork event handler. If it's true, set the event argument's Cancel property to true, and return. (The worker can also set Cancel and exit without CancellationPending being true if it decides that the job is too difficult and it can't go on.)

  3. Call CancelAsync to request cancellation.

Here's an example that implements all the preceding features:

using System;
using System.Threading;
using System.ComponentModel;

class Program
{
  static BackgroundWorker bw;

  static void Main(  )
  {
    bw = new BackgroundWorker(  );
    bw.WorkerReportsProgress = true;
    bw.WorkerSupportsCancellation = true;
    bw.DoWork += bw_DoWork;
    bw.ProgressChanged += bw_ProgressChanged;
    bw.RunWorkerCompleted += bw_RunWorkerCompleted;

    bw.RunWorkerAsync ("Hello to worker");

    Console.WriteLine ("Press Enter in the next 5 seconds to cancel");
    Console.ReadLine(  );
    if (bw.IsBusy) bw.CancelAsync(  );
    Console.ReadLine(  );
  }
  static void bw_DoWork (object sender, DoWorkEventArgs e)
  {
    for (int i = 0; i <= 100; i += 20)
    {
      if (bw.CancellationPending) { e.Cancel = true; return; }
      bw.ReportProgress (i);
      Thread.Sleep (1000);      // Just for the demo... don't go sleeping
    }                           // for real in pooled threads!

    e.Result = 123;    // This gets passed to RunWorkerCompleted
  }

  static void bw_RunWorkerCompleted (object sender,
                                     RunWorkerCompletedEventArgs e)
  {
    if (e.Cancelled)
      Console.WriteLine ("You cancelled!");
    else if (e.Error != null)
      Console.WriteLine ("Worker exception: " + e.Error.ToString(  ));
    else
      Console.WriteLine ("Complete: " + e.Result);      // from DoWork
  }

  static void bw_ProgressChanged (object sender,
                                  ProgressChangedEventArgs e)
  {
    Console.WriteLine ("Reached " + e.ProgressPercentage + "%");
  }
}

// Output:
Press Enter in the next 5 seconds to cancel
Reached 0%
Reached 20%
Reached 40%
Reached 60%
Reached 80%
Reached 100%
Complete: 123

Press Enter in the next 5 seconds to cancel
Reached 0%
Reached 20%
Reached 40%

You cancelled!

Subclassing BackgroundWorker

BackgroundWorker is not sealed and provides a virtual OnDoWork method, suggesting another pattern for its use. In writing a potentially long-running method, you could write an additional version returning a subclassed BackgroundWorker, preconfigured to perform the job concurrently. The consumer then needs to handle only the RunWorkerCompleted and ProgressChanged events. For instance, suppose we wrote a time-consuming method called GetFinancialTotals:

public class Client
{
  Dictionary <string,int> GetFinancialTotals (int foo, int bar) { ... }
  ...
}

We could refactor it as follows:

public class Client
{
  public FinancialWorker GetFinancialTotalsBackground (int foo, int bar)
  {
    return new FinancialWorker (foo, bar);
  }
}

public class FinancialWorker : BackgroundWorker
{
  public Dictionary <string,int> Result;   // You can add typed fields.
  public volatile int Foo, Bar;            // Exposing them via properties
                                           // protected with locks would
  public FinancialWorker(  )                 // also work well.
  {
    WorkerReportsProgress = true;
    WorkerSupportsCancellation = true;
  }

  public FinancialWorker (int foo, int bar) : this(  )
  {
    this.Foo = foo; this.Bar = bar;
  }

  protected override void OnDoWork (DoWorkEventArgs e)
  {
    ReportProgress (0, "Working hard on this report...");

    // Initialize financial report data
    // ...

    while (!<finished report>)
    {
      if (CancellationPending) { e.Cancel = true; return; }
      // Perform another calculation step ...
      // ...
      ReportProgress (percentCompleteCalc, "Getting there...");
    }
    ReportProgress (100, "Done!");
    e.Result = Result = <completed report data>;
  }
}

Whoever calls GetFinancialTotalsBackground then gets a FinancialWorker: a wrapper to manage the background operation with real-world usability. It can report progress, can be canceled, and is friendly with WPF and Windows Forms applications. It's also exception-handled, and it uses a standard protocol (in common with that of anyone else using BackgroundWorker!).

Subclassing BackgroundWorker in this manner yields the benefits of implementing the event-based asynchronous pattern, but in a tidier fashion and with less effort.

ReaderWriterLockSlim

Quite often, instances of a type are thread-safe for concurrent read operations, but not for concurrent updates (nor for a concurrent read and update). This can also be true with resources such as a file. Although protecting instances of such types with a simple exclusive lock for all modes of access usually does the trick, it can unreasonably restrict concurrency if there are many readers and just occasional updates. An example of where this could occur is in a business application server, where commonly used data is cached for fast retrieval in static fields. The ReaderWriterLockSlim class is designed to provide maximum-availability locking in just this scenario.

Tip

ReaderWriterLockSlim is new to Framework 3.5 and is a replacement for the older "fat" ReaderWriterLock class. The latter is similar in functionality, but it is several times slower and has an inherent design fault in its mechanism for handling lock upgrades.

With both classes, there are two basic kinds of lock-a read lock and a write lock:

  • A write lock is universally exclusive.

  • A read lock is compatible with other read locks.

So, a thread holding a write lock blocks all other threads trying to obtain a read or write lock (and vice versa). But if no thread holds a write lock, any number of threads may concurrently obtain a read lock.

ReaderWriterLockSlim defines the following methods for obtaining and releasing read/write locks:

public void EnterReadLock(  );
public void ExitReadLock(  );
public void EnterWriteLock(  );
public void ExitWriteLock(  );

Additionally, there are "Try" versions of all EnterXXX methods that accept timeout arguments in the style of Monitor.TryEnter (timeouts can occur quite easily if the resource is heavily contended). ReaderWriterLock provides similar methods, named AcquireXXX and ReleaseXXX. These throw an ApplicationException if a timeout occurs rather than returning false.
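For instance (a sketch, assuming rw is a ReaderWriterLockSlim field):

if (rw.TryEnterReadLock (50))         // Wait up to 50 ms for a read lock
  try     { /* read the shared data */ }
  finally { rw.ExitReadLock(  ); }
else
  Console.WriteLine ("Couldn't get a read lock in time");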

The following program demonstrates ReaderWriterLockSlim. Three threads continually enumerate a list, while two further threads append a random number to the list every second. A read lock protects the list readers, and a write lock protects the list writers:

class SlimDemo
{
  static ReaderWriterLockSlim rw = new ReaderWriterLockSlim(  );
  static List<int> items = new List<int>(  );
  static Random rand = new Random(  );

  static void Main(  )
  {
    new Thread (Read).Start(  );
    new Thread (Read).Start(  );
    new Thread (Read).Start(  );

    new Thread (Write).Start ("A");
    new Thread (Write).Start ("B");
  }

  static void Read(  )
  {
    while (true)
    {
      rw.EnterReadLock(  );
      foreach (int i in items) Thread.Sleep (10);
      rw.ExitReadLock(  );
    }
  }

  static void Write (object threadID)
  {
    while (true)
    {
      int newNumber = GetRandNum (100);
      rw.EnterWriteLock(  );
      items.Add (newNumber);
      rw.ExitWriteLock(  );
      Console.WriteLine ("Thread " + threadID + " added " + newNumber);
      Thread.Sleep (100);
    }
  }

  static int GetRandNum (int max) { lock (rand) return rand.Next (max); }
}

Tip

In production code, you'd typically add try/finally blocks to ensure that locks were released if an exception was thrown.

Here's the result:

Thread B added 61
Thread A added 83
Thread B added 55
Thread A added 33
...

ReaderWriterLockSlim allows more concurrent Read activity than would a simple lock. We can illustrate this by inserting the following line into the Write method, at the start of the while loop:

Console.WriteLine (rw.CurrentReadCount + " concurrent readers");

This nearly always prints "3 concurrent readers" (the Read methods spend most of their time inside the foreach loops). As well as CurrentReadCount, ReaderWriterLockSlim provides the following properties for monitoring locks:

public bool IsReadLockHeld            { get; }
public bool IsUpgradeableReadLockHeld { get; }
public bool IsWriteLockHeld           { get; }

public int  WaitingReadCount          { get; }
public int  WaitingUpgradeCount       { get; }
public int  WaitingWriteCount         { get; }

public int  RecursiveReadCount        { get; }
public int  RecursiveUpgradeCount     { get; }
public int  RecursiveWriteCount       { get; }

Upgradeable Locks and Recursion

Sometimes it's useful to swap a read lock for a write lock in a single atomic operation. For instance, suppose you want to add an item to a list only if the item wasn't already present. Ideally, you'd want to minimize the time spent holding the (exclusive) write lock, so you might proceed as follows:

  1. Obtain a read lock.

  2. Test if the item is already present in the list, and if so, release the lock and return.

  3. Release the read lock.

  4. Obtain a write lock.

  5. Add the item.

The problem is that another thread could sneak in and modify the list (adding the same item, for instance) between steps 3 and 4. ReaderWriterLockSlim addresses this through a third kind of lock called an upgradeable lock. An upgradeable lock is like a read lock except that it can later be promoted to a write lock in an atomic operation. Here's how you use it:

  1. Call EnterUpgradeableReadLock.

  2. Perform read-based activities (e.g., test whether the item is already present in the list).

  3. Call EnterWriteLock (this converts the upgradeable lock to a write lock).

  4. Perform write-based activities (e.g., add the item to the list).

  5. Call ExitWriteLock (this converts the write lock back to an upgradeable lock).

  6. Perform any other read-based activities.

  7. Call ExitUpgradeableReadLock.

From the caller's perspective, it's rather like nested or recursive locking. Functionally, though, in step 3, ReaderWriterLockSlim releases your read lock and obtains a fresh write lock, atomically.

There's another important difference between upgradeable locks and read locks. While an upgradeable lock can coexist with any number of read locks, only one upgradeable lock can itself be taken out at a time. This prevents conversion deadlocks by serializing competing conversions-just as update locks do in SQL Server:

SQL Server          ReaderWriterLockSlim
----------          --------------------
Share lock          Read lock
Exclusive lock      Write lock
Update lock         Upgradeable lock

We can demonstrate an upgradeable lock by changing the Write method in the preceding example so that it adds a number to the list only if it's not already present.
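A minimal sketch of such a modified Write method (reusing the rw, items, and GetRandNum members from the earlier listing) might look like this:

  static void Write (object threadID)
  {
    while (true)
    {
      int newNumber = GetRandNum (100);
      rw.EnterUpgradeableReadLock(  );           // Compatible with read locks
      if (!items.Contains (newNumber))
      {
        rw.EnterWriteLock(  );                   // Promote atomically to a write lock
        items.Add (newNumber);
        rw.ExitWriteLock(  );                    // Back to an upgradeable lock
        Console.WriteLine ("Thread " + threadID + " added " + newNumber);
      }
      rw.ExitUpgradeableReadLock(  );
      Thread.Sleep (100);
    }
  }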

Tip

ReaderWriterLock can also do lock conversions-but unreliably because it doesn't support the concept of upgradeable locks. This is why the designers of ReaderWriterLockSlim had to start afresh with a new class.

Lock recursion

Ordinarily, nested or recursive locking is prohibited with ReaderWriterLockSlim. Hence, the following throws an exception:

var rw = new ReaderWriterLockSlim(  );
rw.EnterReadLock(  );
rw.EnterReadLock(  );
rw.ExitReadLock(  );
rw.ExitReadLock(  );

It runs without error, however, if you construct ReaderWriterLockSlim as follows:

var rw = new ReaderWriterLockSlim (LockRecursionPolicy.SupportsRecursion);

This ensures that recursive locking can happen only if you plan for it. Recursive locking can bring undesired complexity because it's possible to acquire more than one kind of lock:

rw.EnterWriteLock(  );
rw.EnterReadLock(  );
Console.WriteLine (rw.IsReadLockHeld);     // True
Console.WriteLine (rw.IsWriteLockHeld);    // True
rw.ExitReadLock(  );
rw.ExitWriteLock(  );

The basic rule is that once you've acquired a lock, subsequent recursive locks can be less, but not greater, on the following scale:

Read Lock → Upgradeable Lock → Write Lock

A request to promote an upgradeable lock to a write lock, however, is always legal.

Timers

If you need to execute some method repeatedly at regular intervals, the easiest way is with a timer. Timers are convenient, and they are efficient in their use of memory and resources-compared with techniques such as the following:

new Thread (delegate(  ) {
                         while (enabled)
                         {
                           DoSomeAction(  );
                           Thread.Sleep (TimeSpan.FromHours (24));
                         }
                       }).Start(  );

Not only does this permanently tie up a thread resource, but without additional coding, DoSomeAction will happen at a later time each day. Timers solve these problems.

The .NET Framework provides four timers. Two of these are general-purpose multithreaded timers:

  • System.Threading.Timer

  • System.Timers.Timer

The other two are special-purpose single-threaded timers:

  • System.Windows.Forms.Timer (Windows Forms timer)

  • System.Windows.Threading.DispatcherTimer (WPF timer)

The multithreaded timers are more powerful, accurate, and flexible; the single-threaded timers are safer and more convenient for running simple tasks that update Windows Forms controls or WPF elements.

Multithreaded Timers

System.Threading.Timer is the simplest multithreaded timer: it has just a constructor and two methods (a delight for minimalists, as well as book authors!). In the following example, a timer calls the Tick method, which writes "tick..." after five seconds have elapsed, and then every second after that, until the user presses Enter:

using System;
using System.Threading;

class Program
{
  static void Main()
  {
    // First interval = 5000ms; subsequent intervals = 1000ms
    Timer tmr = new Timer (Tick, "tick...", 5000, 1000);
    Console.ReadLine();
    tmr.Dispose();                     // Ends the timer
  }

  static void Tick (object data)
  {
    // This runs on a pooled thread
    Console.WriteLine (data);          // Writes "tick..."
  }
}

You can change a timer's interval later by calling its Change method. If you want a timer to fire just once, specify Timeout.Infinite in the constructor's last argument.
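For example (a brief sketch reusing the Tick callback from the listing above; the oneShot name is illustrative):

// Fire once, three seconds from now, and then never again
Timer oneShot = new Timer (Tick, "once", 3000, Timeout.Infinite);

// Later: reschedule it to fire after one second, then every two seconds
oneShot.Change (1000, 2000);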

The .NET Framework provides another timer class of the same name in the System.Timers namespace. This simply wraps the System.Threading.Timer, providing additional convenience while using the identical underlying engine. Here's a summary of its added features:

  • A Component implementation, allowing it to be sited in Visual Studio's designer

  • An Interval property instead of a Change method

  • An Elapsed event instead of a callback delegate

  • An Enabled property to start and stop the timer (its default value being false)

  • Start and Stop methods in case you're confused by Enabled

  • An AutoReset flag for indicating a recurring event (default value is true)

Here's an example:

using System;
using System.Timers;   // Timers namespace rather than Threading

class SystemTimer
{
  static void Main()
  {
    Timer tmr = new Timer();       // Doesn't require any args
    tmr.Interval = 500;
    tmr.Elapsed += tmr_Elapsed;    // Uses an event instead of a delegate
    tmr.Start();                   // Start the timer
    Console.ReadLine();
    tmr.Stop();                    // Stop the timer
    Console.ReadLine();
    tmr.Start();                   // Restart the timer
    Console.ReadLine();
    tmr.Dispose();                 // Permanently stop the timer
  }

  static void tmr_Elapsed (object sender, EventArgs e)
  {
    Console.WriteLine ("Tick");
  }
}

Multithreaded timers use the thread pool to allow a few threads to serve many timers. This means that the callback method or Tick event may fire on a different thread each time it is called. Furthermore, a Tick always fires (approximately) on time, regardless of whether the previous Tick has finished executing. Hence, callbacks or event handlers must be thread-safe.
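As a minimal illustration of what that can mean (the locker and tickCount fields are illustrative additions, not part of the earlier listings), a callback that touches shared state could guard it with a lock:

static readonly object locker = new object();
static int tickCount;

static void Tick (object data)            // May run on several pooled threads at once
{
  lock (locker)                           // Serialize access to the shared counter
  {
    tickCount++;
    Console.WriteLine (data + " " + tickCount);
  }
}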

The precision of multithreaded timers depends on the operating system, and is typically in the 10 to 20 ms region. If you need greater precision, you can use P/Invoke interop and call the Windows multimedia timer. This has precision down to 1 ms; it is defined in winmm.dll. First call timeBeginPeriod to inform the operating system that you need high timing precision, and then call timeSetEvent to start a multimedia timer. When you're done, call timeKillEvent to stop the timer and timeEndPeriod to inform the OS that you no longer need high timing precision. Chapter 22, Integrating with Native DLLs demonstrates calling external methods with P/Invoke. You can find complete examples on the Internet that use the multimedia timer by searching for the keywords dllimport winmm.dll timesetevent.
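As a rough sketch of the interop involved (the declarations below mirror the native winmm.dll functions, but verify the exact signatures against the Windows SDK headers before relying on them):

using System;
using System.Runtime.InteropServices;

class MultimediaTimerSketch
{
  // Native signature: void CALLBACK TimeProc (UINT id, UINT msg, DWORD_PTR user, DWORD_PTR dw1, DWORD_PTR dw2)
  delegate void TimeProc (uint id, uint msg, UIntPtr user, UIntPtr dw1, UIntPtr dw2);

  [DllImport ("winmm.dll")] static extern uint timeBeginPeriod (uint period);
  [DllImport ("winmm.dll")] static extern uint timeEndPeriod (uint period);
  [DllImport ("winmm.dll")] static extern uint timeSetEvent (uint delay, uint resolution,
                                                             TimeProc proc, UIntPtr user, uint eventType);
  [DllImport ("winmm.dll")] static extern uint timeKillEvent (uint id);

  const uint TIME_PERIODIC = 1;

  static TimeProc keepAlive;       // Keep a reference so the delegate isn't garbage-collected

  static void Main()
  {
    timeBeginPeriod (1);                                   // Request 1 ms timing precision
    keepAlive = delegate { Console.Write ("."); };
    uint id = timeSetEvent (5, 1, keepAlive,
                            UIntPtr.Zero, TIME_PERIODIC);  // Fire every 5 ms
    Console.ReadLine();
    timeKillEvent (id);                                    // Stop the timer
    timeEndPeriod (1);                                     // Relinquish high precision
  }
}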

Single-Threaded Timers

The .NET Framework provides timers designed to eliminate thread-safety issues for Windows Forms and WPF applications:

  • System.Windows.Forms.Timer (Windows Forms)

  • System.Windows.Threading.DispatcherTimer (WPF)

Warning

The single-threaded timers are not designed to work outside their respective environments. If you use a Windows Forms timer in a Windows Service application, for instance, the Tick event won't fire!

Both are like System.Timers.Timer in the members that they expose (Interval, Tick, Start, and Stop) and are used in a similar manner. However, they differ in how they work internally. Instead of using the thread pool to generate timer events, the Windows Forms and WPF timers rely on the message pumping mechanism of their underlying user interface model. This means that the Tick event always fires on the same thread that originally created the timer, which, in a normal application, is the same thread used to manage all user interface elements and controls. This has a number of benefits:

  • You can forget about thread safety.

  • A fresh Tick will never fire until the previous Tick has finished processing.

  • You can update user interface elements and controls directly from Tick event handling code, without calling Control.Invoke or Dispatcher.Invoke.

It sounds too good to be true, until you realize that a program employing these timers is not really multithreaded: there is no parallel execution. One thread serves all timers, as well as processing all UI events. This brings us to the disadvantage of single-threaded timers:

  • Unless the Tick event handler executes quickly, the user interface becomes unresponsive.

This makes the Windows Forms and WPF timers suitable for only small jobs, typically those that involve updating some aspect of the user interface (e.g., a clock or countdown display). Otherwise, you need a multithreaded timer.
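As a minimal sketch of the kind of small job these timers suit (the ClockForm class and its label are illustrative, not from the chapter's examples), a Windows Forms timer can update a clock display directly from its Tick handler:

using System;
using System.Windows.Forms;

class ClockForm : Form
{
  Label clockLabel = new Label { AutoSize = true };
  Timer timer = new Timer();             // System.Windows.Forms.Timer

  public ClockForm()
  {
    Controls.Add (clockLabel);
    timer.Interval = 1000;               // Tick once per second
    timer.Tick += delegate               // Runs on the UI thread: no Control.Invoke needed
    {
      clockLabel.Text = DateTime.Now.ToLongTimeString();
    };
    timer.Start();
  }

  [STAThread]
  static void Main() { Application.Run (new ClockForm()); }
}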

In terms of precision, the single-threaded timers are similar to the multithreaded timers (tens of milliseconds), although they are typically less accurate, because they can be delayed while other user interface requests (or other timer events) are processed.