March 2010

Volume 25 Number 03

Finger Style - Exploring Multi-Touch Support in Silverlight

By Charles Petzold | March 2010

Whenever I visit the American Museum of Natural History in New York City, I always make a point to drop in on the Hall of Primates. With a large selection of skeletons and stuffed specimens, the hall presents an evolutionary panorama of the Primate order—animals ranging in size from tiny tree shrews, lemurs and marmosets, through chimpanzees, great apes and humans.

What leaps out from this exhibit is a striking characteristic common to all primates: the bone structure of the hand, including an opposable thumb. The same arrangement of joints and digits that allowed our ancestors and distant cousins to grasp and climb tree branches lets our species manipulate the world around us, and build things. Our hands may have their origins in the paws of tiny primates tens of millions of years ago, yet they are also a major factor in what makes us distinctly human.

Is it any wonder we reach out instinctively to point at or even touch objects on the computer screen?

In response to this human desire to bring our fingers into more intimate connection with the computer, our input devices have been evolving as well. The mouse is terrific for selecting and dragging, but hopeless for freeform sketching or handwriting. The tablet stylus lets us write but often feels awkward for stretching or moving. Touch screens are familiar from ATMs and museum kiosks, but are usually restricted to simple pointing and pressing.

I think the technology known as “multi-touch” represents a big leap forward. As the name implies, multi-touch goes beyond touch screens of the past to detect multiple fingers, and this makes a huge difference in the types of movement and gestures that can be conveyed through the screen. Multi-touch has evolved from the touch-oriented input devices of the past, but at the same time suggests an intrinsically different input paradigm.

Multi-touch has probably been most evident on television news broadcasts, with maps on large screens manipulated by the resident meteorologist or pundit. Microsoft has been exploring multi-touch in several ways—from the coffee-table-size Microsoft Surface computer to small devices like the Zune HD—and the technology is becoming fairly standard on smartphones as well.

While Microsoft Surface can respond to many simultaneous fingers (and even contains internal cameras to view objects placed on the glass), most other multi-touch devices are limited to a discrete number. Many respond to only two fingers—or touch points, as they’re called. (I will be using finger and touch point fairly synonymously.) But synergy is at work here: On the computer screen, two fingers are more than twice as powerful as one.

The limitation of two touch points is characteristic of the multi-touch monitors that have become available recently for desktop PCs and laptops, as well as the customized Acer Aspire 1420P laptop distributed to attendees at the Microsoft Professional Developers Conference (PDC) last November—commonly referred to as the PDC laptop. The distribution of the PDC laptop provided a unique opportunity for thousands of developers to write multi-touch-aware applications.

The PDC laptop is the machine I used to explore multi-touch support under Silverlight 3.

Silverlight Events and Classes

Multi-touch support is becoming standard in the various Windows APIs and frameworks. Support is built into Windows 7 and the forthcoming Windows Presentation Foundation (WPF) 4. (The Microsoft Surface computer is based around WPF as well, but includes custom extensions for its very special capabilities.)

For this article I’d like to focus on the multi-touch support in Silverlight 3. The support is a little on the light side, but it’s certainly adequate, and very useful for exploring basic multi-touch concepts.

If you publish a multi-touch Silverlight application to your Web site, who will be able to use it? The user will need a multi-touch monitor, of course, but will also need to be running the Silverlight application under an OS and browser that support multi-touch. For now, Internet Explorer 8 running under Windows 7 provides this support, and likely more OSes and browsers will support multi-touch in the future.

The Silverlight 3 support for multi-touch consists of five classes, one delegate, one enumeration and a single event. There is no way to determine if your Silverlight program is running on a multi-touch device or, if it is, how many touch points the device supports.

A Silverlight application that wants to respond to multi-touch must attach a handler to the static Touch.FrameReported event:

Touch.FrameReported += OnTouchFrameReported;

You can attach this event handler on machines that don’t have multi-touch monitors and nothing bad will happen. The FrameReported event is the only public member of the static Touch class. The handler looks like this:

void OnTouchFrameReported(
  object sender, TouchFrameEventArgs args) {
  ...
}

You can install multiple Touch.FrameReported event handlers in your application, and all of them will report all touch events anywhere in the application.

TouchFrameEventArgs has one public property named TimeStamp that I haven’t had occasion to use, and three essential public methods:

  • TouchPoint GetPrimaryTouchPoint(UIElement relativeTo)
  • TouchPointCollection GetTouchPoints(UIElement relativeTo)
  • void SuspendMousePromotionUntilTouchUp()

The argument to GetPrimaryTouchPoint or GetTouchPoints is used solely for reporting position information in the TouchPoint object. You can use null for this argument; positioning information will then be relative to the upper-left corner of the entire Silverlight application.

Multi-touch supports multiple fingers touching the screen, and each finger touching the screen (up to the maximum number, which currently is usually two) is a touch point. The primary touch point refers to the finger that touches the screen when no other fingers are touching the screen and the mouse button is not pressed.

Touch a finger to the screen. That’s the primary touch point. With the first finger still touching the screen, put a second finger on the screen. Obviously that second finger is not a primary touch point. But now, with the second finger still on the screen, lift the first finger and put it back on the screen. Is that a primary touch point? No, it’s not. A primary touch point occurs only when no other fingers are touching the screen.

A primary touch point maps onto the touch point that will be promoted to the mouse. In real multi-touch applications, you should be careful not to rely on the primary touch point, because the user will typically not attach specific significance to the first touch.

Events are fired only for fingers actually touching the screen. There is no hover detection for fingers very close to the screen, but not touching.

By default, activity involving the primary touch point is promoted to various mouse events. This allows your existing applications to respond to touch without any special coding. Touching the screen becomes a MouseLeftButtonDown event, moving the finger while it’s still touching the screen becomes a MouseMove, and lifting the finger is a MouseLeftButtonUp.

The MouseEventArgs object that accompanies mouse events includes a property named StylusDevice that helps differentiate mouse events from stylus and touch events. In my experience with the PDC laptop, the DeviceType property of StylusDevice equals TabletDeviceType.Mouse when the event comes from the mouse, and TabletDeviceType.Touch regardless of whether the screen is touched with a finger or the stylus.
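For example, a mouse handler that wants to distinguish promoted touch input from a physical mouse might check the property like this (a sketch based on the behavior just described; the handler name is mine):

```csharp
// Sketch: distinguishing promoted touch events from real mouse events,
// per the PDC-laptop behavior described above. Handler name is arbitrary.
void OnMouseLeftButtonDown(object sender, MouseButtonEventArgs args) {
  bool isFromTouch = args.StylusDevice != null &&
    args.StylusDevice.DeviceType == TabletDeviceType.Touch;

  // ... respond differently for touch vs. mouse if desired
}
```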

Only the primary touch point is promoted to mouse events, and—as the name of the third method of TouchFrameEventArgs suggests—you can inhibit that promotion. More on this shortly.

A particular Touch.FrameReported event might be fired based on one touch point or multiple touch points. The TouchPointCollection returned from the GetTouchPoints method contains all the touch points associated with a particular event. The TouchPoint returned from GetPrimaryTouchPoint is always a primary touch point. If there is no primary touch point associated with the particular event, GetPrimaryTouchPoint will return null.

Even if the TouchPoint returned from GetPrimaryTouchPoint is non-null, it will not be the same object as one of the TouchPoint objects returned from GetTouchPoints, although all the properties will be the same if the argument passed to the methods is the same.

The TouchPoint class defines the following four get-only properties, all backed by dependency properties:

  • Action of type TouchAction, an enumeration with members Down, Move and Up.
  • Position of type Point relative to the element passed as an argument to the GetPrimaryTouchPoint or GetTouchPoints method (or relative to the upper-left corner of the application for an argument of null).
  • Size of type Size. Size information is not available on the PDC laptop so I didn’t work with this property at all.
  • TouchDevice of type TouchDevice.

You can call the SuspendMousePromotionUntilTouchUp method from the event handler only when GetPrimaryTouchPoint returns a non-null object and the Action property equals TouchAction.Down.

The TouchDevice object has two get-only properties also backed by dependency properties:

  • DirectlyOver of type UIElement—the topmost element underneath the finger.
  • Id of type int.

DirectlyOver need not be a child of the element passed to GetPrimaryTouchPoint or GetTouchPoints. This property can be null if the finger is within the Silverlight application (as defined by the dimensions of the Silverlight plug-in object), but not within an area encompassed by a hit-testable control. (Panels with a null background brush are not hit-testable.)

The ID property is crucial for distinguishing among multiple fingers. A particular series of events associated with a particular finger will always begin with an Action of Down when the finger touches the screen, followed by Move events, finishing with an Up event. All these events will be associated with the same ID. (But don’t assume that a primary touch point will have an ID value of 0 or 1.)

Most non-trivial multi-touch code will make use of a Dictionary collection where the ID property of TouchDevice is the dictionary key. This is how you will store information for a particular finger across events.

Examining the Events

When exploring a new input device, it’s always helpful to write a little application that logs the events on the screen so you can get an idea of what they’re like. Among the downloadable code accompanying this article is a project named MultiTouchEvents. This project consists of two side-by-side TextBox controls showing the multi-touch events for two fingers. If you have a multi-touch monitor, you can run this program from my Web site.

The XAML file consists of just a two-column Grid containing two TextBox controls named txtbox1 and txtbox2. The code file is shown in Figure 1.

Figure 1 Code for MultiTouchEvents

using System;
using System.Collections.Generic;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;

namespace MultiTouchEvents {
  public partial class MainPage : UserControl {
    Dictionary<int, TextBox> touchDict = 
      new Dictionary<int, TextBox>();

    public MainPage() {
      InitializeComponent();
      Touch.FrameReported += OnTouchFrameReported;
    }

    void OnTouchFrameReported(
      object sender, TouchFrameEventArgs args) {

      TouchPoint primaryTouchPoint = 
        args.GetPrimaryTouchPoint(null);

      // Inhibit mouse promotion
      if (primaryTouchPoint != null && 
        primaryTouchPoint.Action == TouchAction.Down)
        args.SuspendMousePromotionUntilTouchUp();

      TouchPointCollection touchPoints = 
        args.GetTouchPoints(null);

      foreach (TouchPoint touchPoint in touchPoints) {
        TextBox txtbox = null;
        int id = touchPoint.TouchDevice.Id;

        // Limit touch points to 2
        if (touchDict.Count == 2 && 
          !touchDict.ContainsKey(id)) continue;

        switch (touchPoint.Action) {
          case TouchAction.Down:
            txtbox = touchDict.ContainsValue(txtbox1) ? 
              txtbox2 : txtbox1;
            touchDict.Add(id, txtbox);
            break;

          case TouchAction.Move:
            txtbox = touchDict[id];
            break;

          case TouchAction.Up:
            txtbox = touchDict[id];
            touchDict.Remove(id);
            break;
        }

        txtbox.Text += String.Format("{0} {1} {2}\r\n", 
          touchPoint.TouchDevice.Id, touchPoint.Action, 
          touchPoint.Position);
        txtbox.Select(txtbox.Text.Length, 0);
      }
    }
  }
}

Notice the dictionary definition at the top of the class. The dictionary keeps track of which TextBox is associated with the two touch point IDs.

The OnTouchFrameReported handler begins by inhibiting all mouse promotion. That’s the only reason for the call to GetPrimaryTouchPoint, and very often the only reason you’ll be calling this method in a real program.

A foreach loop enumerates through the TouchPoint members of the TouchPointCollection returned from GetTouchPoints. Because the program contains only two TextBox controls and is only equipped to handle two touch points, it ignores any touch point where the dictionary already has two and the ID is not in that dictionary. (Just as you want your multi-touch-aware Silverlight program to handle multiple fingers, you don’t want it to crash if it encounters too many fingers!) The ID is added to the dictionary on a Down event, and removed from the dictionary on an Up event.

You’ll notice that at times the TextBox controls get bogged down with too much text, and you’ll need to select all the text and delete it (Ctrl-A, Ctrl-X) to get the program running smoothly again.

What you’ll notice from this program is that multi-touch input is captured on an application level. For example, if you press your finger on the application, and then move it off the application, the application will continue to receive Move events and eventually an Up event when you lift your finger up. In fact, once an application is getting some multi-touch input, multi-touch input to other applications is inhibited, and the mouse cursor disappears.

This application-centric capturing of multi-touch input allows the MultiTouchEvents application to be very sure of itself. For example, on Move and Down events the program simply assumes that the ID will be in the dictionary. In a real application, you might want more bullet-proofing just in case something odd happens, but you’ll always get the Down event.

Two-Finger Manipulation

One of the standard multi-touch scenarios is a photo gallery that lets you move, resize and rotate photos with your fingers. I decided to try something similar—just to give myself a little familiarity with the principles involved—but simpler as well. My version of the program has only a single item to manipulate, a text string of the word “TOUCH.” You can run the TwoFingerManipulation program from my Web site.

When you code an application for multi-touch, you’ll probably always inhibit mouse promotion for multi-touch-aware controls. But to make your program usable without a multi-touch monitor, you’ll also add specific mouse processing.

If you have only a mouse or a single finger, you can still move the string within the TwoFingerManipulation program, but you can change only its position—the graphical operation known as translation. With two fingers on a multi-touch screen, you can also scale the object and rotate it.

When I sat down with a pad and pen to figure out the algorithm I’d need for this scaling and rotation, it soon became obvious that there was no unique solution!

Suppose one finger remains fixed at the point ptRef. (All points here are relative to a display surface underneath the object being manipulated.) The other finger moves from the point ptOld to ptNew. As shown in Figure 2, you can use these three points solely to calculate horizontal and vertical scaling factors for the object.

Figure 2 Two-Finger Movement Converted to Scaling Factors

For example, the horizontal scaling factor is the ratio of the distance between ptNew.X and ptRef.X to the distance between ptOld.X and ptRef.X, or:

scaleX = (ptNew.X – ptRef.X) / (ptOld.X – ptRef.X)

Vertical scaling is similar. For the example in Figure 2, the horizontal scaling factor is 2 and the vertical scaling factor is ½.
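The two scaling factors can be computed in just a few lines. This sketch assumes the three points are available as Silverlight Point values (a real program would also guard against zero denominators):

```csharp
// Sketch of the scaling computation from Figure 2.
// ptRef is the stationary finger; ptOld and ptNew are the
// previous and current positions of the moving finger.
double scaleX = (ptNew.X - ptRef.X) / (ptOld.X - ptRef.X);
double scaleY = (ptNew.Y - ptRef.Y) / (ptOld.Y - ptRef.Y);
```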

This is certainly the easier way to code it. Yet, the program seems to function more naturally if the two fingers rotate the object as well. This is shown in Figure 3.

Figure 3 Two-Finger Movement Converted to Rotation and Scaling

First, the angles of the two vectors—from ptRef to ptOld, and from ptRef to ptNew—are calculated. (The Math.Atan2 method is ideal for this job.) Then ptOld is rotated relative to ptRef by the difference in these angles. This rotated ptOld is then used with ptRef and ptNew to calculate scaling factors. These scaling factors are much less because a rotation component has been removed.
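In code, the steps just described might look something like this (a sketch of the idea, not the actual ComputeMoveMatrix implementation; the point variables are the same as in the figures):

```csharp
// Angles of the two vectors from the reference point
double angleOld = Math.Atan2(ptOld.Y - ptRef.Y, ptOld.X - ptRef.X);
double angleNew = Math.Atan2(ptNew.Y - ptRef.Y, ptNew.X - ptRef.X);
double rotation = angleNew - angleOld;

// Rotate ptOld around ptRef by the difference in angles
double dx = ptOld.X - ptRef.X, dy = ptOld.Y - ptRef.Y;
Point ptOldRotated = new Point(
  ptRef.X + dx * Math.Cos(rotation) - dy * Math.Sin(rotation),
  ptRef.Y + dx * Math.Sin(rotation) + dy * Math.Cos(rotation));

// The rotated point then yields the (reduced) scaling factors
double scaleX = (ptNew.X - ptRef.X) / (ptOldRotated.X - ptRef.X);
double scaleY = (ptNew.Y - ptRef.Y) / (ptOldRotated.Y - ptRef.Y);
```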

The actual algorithm (implemented in the ComputeMoveMatrix method in the C# file) turned out to be fairly easy. However, the program also required a bunch of transform support code to compensate for the deficiencies of the Silverlight transform classes, which have no public Value property or matrix multiplication as in the WPF.

In the actual program, both fingers can be moving at the same time, and handling the interaction between the two fingers is simpler than it initially seems. Each moving finger is handled independently using the other finger as the reference point. Despite the increased complexity of the calculation, the result seems more natural and I think there’s a simple explanation: In real life, it is very common to rotate objects with your fingers, but very unusual to scale them.

Rotation is so common in the real world that it might make sense to implement it when an object is manipulated by only one finger or the mouse. This is demonstrated in the alternative AltFingerManipulation program, also runnable from my Web site. For two fingers, the program works the same as TwoFingerManipulation. For one finger, it calculates a rotation relative to the center of the object, and then uses any excess movement away from the center for translation.

Wrapping the Event with More Events

Generally I like to work with classes that Microsoft thoughtfully provides in a framework rather than wrapping them in my own code. But I had in mind some multi-touch applications I thought would benefit from a more sophisticated event interface.

I wanted first a more modular system. I wanted to mix custom controls that would handle their own touch input with existing Silverlight controls that simply let touch input be converted to mouse input. I also wanted to implement capture. Although the Silverlight application itself captures the multi-touch device, I wanted individual controls to independently capture a particular touch point.

I also wanted Enter and Leave events. In a sense, these events are the opposite of a capture paradigm. To understand the difference, imagine an on-screen piano keyboard where each key is an instance of the PianoKey control. At first you might think of these keys like mouse-triggered buttons. On a mouse down event the piano key turns a note on, and on a mouse up event it turns the note off.

But that’s not what you want for piano keys. You want the ability to run your finger up and down the keyboard to make glissando effects. The keys really shouldn’t even bother with Down and Up events. They’re really only concerned with Enter and Leave events.

WPF 4 and Microsoft Surface already have routed touch events, and they’re likely coming to Silverlight in the future. But I met my current needs with a class I called TouchManager, implemented in the Petzold.MultiTouch library project in the TouchDialDemos solution. A large portion of TouchManager consists of static methods, fields, and a static handler for the Touch.FrameReported event that allows it to manage touch events throughout an application.

A class that wants to register with TouchManager creates an instance like so:

TouchManager touchManager = new TouchManager(element);

The constructor argument is of type UIElement, and usually it will be the element creating the object:

TouchManager touchManager = new TouchManager(this);

By registering with TouchManager, the class indicates that it is interested in all multi-touch events where the DirectlyOver property of TouchDevice is a child of the element passed to the TouchManager constructor, and that these multi-touch events should not be promoted to mouse events. There is no way to unregister an element.

After creating a new instance of TouchManager, a class can install handlers for events named TouchDown, TouchMove, TouchUp, TouchEnter, TouchLeave and LostTouchCapture:

touchManager.TouchEnter += OnTouchEnter;

All handlers are defined in accordance with the EventHandler<TouchEventArgs> delegate:

void OnTouchEnter(
  object sender, TouchEventArgs args) {

TouchEventArgs defines four properties:

  • Source of type UIElement, which is the element originally passed to the TouchManager constructor.
  • Position of type Point. This position is relative to Source.
  • DirectlyOver of type UIElement, simply copied from the TouchDevice object.
  • Id of type int, also just copied from the TouchDevice object.

Only while processing the TouchDown event is a class allowed to call the Capture method with the touch point ID associated with that event:


All further touch input for that ID goes to the element associated with this TouchManager instance until the TouchUp event or an explicit call to ReleaseTouchCapture. In either case, TouchManager fires the LostTouchCapture event.

The events are generally in the order: TouchEnter, TouchDown, TouchMove, TouchUp, TouchLeave and LostTouchCapture (if applicable). Of course there can be multiple TouchMove events between TouchDown and TouchUp. When a touch point is not captured, there can be multiple events in the order TouchLeave, TouchEnter and TouchMove as the touch point leaves one registered element and enters another.
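To make the event flow concrete, here is a hypothetical sketch of the PianoKey control discussed earlier, wired up through TouchManager. It uses only the enter and leave events, exactly as the glissando scenario requires; PlayNote and StopNote are stand-ins for real sound logic:

```csharp
// Hypothetical PianoKey control using the TouchManager events
// described above. PlayNote/StopNote are assumed helper methods.
public partial class PianoKey : UserControl {
  public PianoKey() {
    InitializeComponent();

    TouchManager touchManager = new TouchManager(this);
    touchManager.TouchEnter += (sender, args) => PlayNote();
    touchManager.TouchLeave += (sender, args) => StopNote();
  }

  void PlayNote() { /* start sounding this key's note */ }
  void StopNote() { /* silence this key's note */ }
}
```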

The TouchDial Control

Changes in user-input paradigms often require you to question old assumptions about the proper design of controls and other input mechanisms. For example, few GUI controls are as solidly entrenched as the scrollbar and slider. You use these controls to navigate large documents or images, but also as tiny volume controls on media players.

As I considered making an on-screen volume control that would respond to touch, I wondered if the old approach was really the correct one. In the real world, sliders are sometimes used as volume controls, but generally restricted to professional mixing panels or graphic equalizers. Most volume controls in the real world are dials. Might a dial be a better solution for a touch-enabled volume control?

I won’t pretend I have the definitive answer, but I’ll show you how to build one.

The TouchDial control is included in the Petzold.MultiTouch library in the TouchDialDemos solution (see the code download for details). TouchDial derives from RangeBase so it can take advantage of the Minimum, Maximum and Value properties—including the coercion logic to keep Value within the Minimum and Maximum range—and the ValueChanged event. But in TouchDial, the Minimum, Maximum and Value properties are all angles in units of degrees.

TouchDial responds to both mouse and touch, and it uses the TouchManager class to capture a touch point. With either the mouse or touch input, TouchDial changes the Value property during a Move event based on the new location and previous location of the mouse or finger relative to a center point. The action is quite similar to Figure 3 except that no scaling is involved. The Move event uses the Math.Atan2 method to convert Cartesian coordinates to angles, and then adds the difference in the two angles to Value.

TouchDial does not include a default template, and hence has no default visual appearance. When using TouchDial, your job is to supply a template, but it can be as simple as a few elements. Obviously something on this template should probably rotate in accordance with changes in the Value property. For convenience, TouchDial supplies a get-only RotateTransform property where the Angle property is equal to the Value property of the RangeBase, and the CenterX and CenterY properties reflect the center of the control.
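Conceptually, TouchDial keeps that property in sync along these lines (my sketch of the behavior just described, not the actual TouchDial source):

```csharp
// Sketch: a RotateTransform kept in sync with the control, as described.
RotateTransform rotate = new RotateTransform();
rotate.Angle = Value;               // degrees, from RangeBase.Value
rotate.CenterX = ActualWidth / 2;   // default rotation center is the
rotate.CenterY = ActualHeight / 2;  // center of the control
```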

Figure 4 shows a XAML file with two TouchDial controls that reference a style and template defined as a resource.

Figure 4 The XAML File for the SimpleTouchDialTemplate Project

<UserControl x:Class="SimpleTouchDialTemplate.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:multitouch="clr-namespace:Petzold.MultiTouch;assembly=Petzold.MultiTouch">

  <UserControl.Resources>
    <Style x:Key="touchDialStyle" 
           TargetType="multitouch:TouchDial">
      <Setter Property="Maximum" Value="180" />
      <Setter Property="Minimum" Value="-180" />
      <Setter Property="Width" Value="200" />
      <Setter Property="Height" Value="200" />
      <Setter Property="HorizontalAlignment" Value="Center" />
      <Setter Property="VerticalAlignment" Value="Center" />
      <Setter Property="Template">
        <Setter.Value>
          <ControlTemplate TargetType="multitouch:TouchDial">
            <Grid>
              <Ellipse Fill="{TemplateBinding Background}" />
              <Grid RenderTransform="{TemplateBinding RotateTransform}">
                <Rectangle Width="20" Margin="10"
                  Fill="{TemplateBinding Foreground}" />
              </Grid>
            </Grid>
          </ControlTemplate>
        </Setter.Value>
      </Setter>
    </Style>
  </UserControl.Resources>

  <Grid x:Name="LayoutRoot">
    <Grid.ColumnDefinitions>
      <ColumnDefinition Width="*" />
      <ColumnDefinition Width="*" />
    </Grid.ColumnDefinitions>

    <multitouch:TouchDial Grid.Column="0"
      Background="Blue" Foreground="Pink"
      Style="{StaticResource touchDialStyle}" />

    <multitouch:TouchDial Grid.Column="1"
      Background="Red" Foreground="Aqua"
      Style="{StaticResource touchDialStyle}" />
  </Grid>
</UserControl>

Notice that the style sets the Maximum property to 180 and the Minimum to -180 to allow the bar to be rotated 180 degrees to the left and right. (Oddly, the program did not function correctly when I switched the order of those two properties in the style definition.) The dial consists simply of a bar made from a Rectangle element within an Ellipse. The bar is inside a single-cell Grid, which has its RenderTransform bound to the RotateTransform property calculated by TouchDial.

The SimpleTouchDialTemplate program is shown running in Figure 5.

Figure 5 The SimpleTouchDialTemplate Program

You can try it out (along with the next two programs I’ll be discussing here) on my Web site.

Turning the bar within the circle is a little awkward with the mouse and feels much more natural with a finger. Notice that you can turn the bar when you press the left mouse button (or put your finger on the screen) anywhere within the circle. While turning the bar, you can move the mouse or finger away because both are captured.

If you want to restrict the user from turning the bar unless the mouse or finger is pressed directly over the bar, you can set the IsHitTestVisible property of the Ellipse to False.

My first version of the TouchDial control didn’t include the RotateTransform property. It made more sense to me that the template could include an explicit RotateTransform where the Angle property was the target of a TemplateBinding to the Value property of the control. However, in Silverlight 3, bindings don’t work on properties of classes not derived from FrameworkElement, so the Angle property of RotateTransform can’t be a binding target (this is fixed in Silverlight 4).

Rotation is always in reference to a center point, and that little fact complicates the TouchDial control. TouchDial uses a center point in two ways: to calculate the angles shown in Figure 3, and also to set the CenterX and CenterY properties of the RotateTransform it creates. By default, TouchDial calculates both centers as half the ActualWidth and ActualHeight properties, which is the center of the control, but there are very many cases where that’s not quite what you want.

For example, in the template in Figure 4, suppose you want to bind the RenderTransform property of the Rectangle to the RotateTransform property of TouchDial. It won’t work correctly because TouchDial is setting the CenterX and CenterY properties of RotateTransform to 100, but the center of the Rectangle relative to itself is actually the point (10, 90). To let you override these defaults that TouchDial calculates from the size of the control, the control defines RenderCenterX and RenderCenterY properties. In the SimpleTouchDialTemplate project, you can set these properties in the style like so:

<Setter Property="RenderCenterX" Value="10" />
<Setter Property="RenderCenterY" Value="90" />

Or, you can set these properties to zero and set the RenderTransformOrigin of the Rectangle element to indicate the center relative to itself:

RenderTransformOrigin="0.5 0.5"

You might also want to use TouchDial in cases where the point used to reference the mouse or finger movement isn’t in the center of the control. In that case, you can set the InputCenterX and InputCenterY properties to override the defaults.

Figure 6 shows the OffCenterTouchDial project XAML file.

Figure 6 The OffCenterTouchDial XAML File

<UserControl x:Class="OffCenterTouchDial.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:multitouch="clr-namespace:Petzold.MultiTouch;assembly=Petzold.MultiTouch">

  <Grid x:Name="LayoutRoot">
    <multitouch:TouchDial Width="300" Height="200" 
      HorizontalAlignment="Center" VerticalAlignment="Center"
      Minimum="-20" Maximum="20"
      InputCenterX="35" InputCenterY="100"
      RenderCenterX="15" RenderCenterY="15">
      <multitouch:TouchDial.Template>
        <ControlTemplate TargetType="multitouch:TouchDial">
          <Grid Background="Pink">
            <Rectangle Height="30" Width="260"
              RadiusX="15" RadiusY="15" Fill="Lime"
              RenderTransform="{TemplateBinding RotateTransform}" />
            <Ellipse Width="10" Height="10"
              Fill="Black" HorizontalAlignment="Left"
              Margin="30" />
          </Grid>
        </ControlTemplate>
      </multitouch:TouchDial.Template>
    </multitouch:TouchDial>
  </Grid>
</UserControl>

This file contains a single TouchDial control where properties are set on the control itself, and the Template property is set to a ControlTemplate containing a single-cell Grid with a Rectangle and an Ellipse. The Ellipse is a tiny symbolic pivot point for the Rectangle, which you can swivel up and down by 20 degrees, as shown in Figure 7.

Figure 7 The OffCenterTouchDial Program

The InputCenterX and InputCenterY properties are always relative to the entire control, so they indicate the location of the center of the Ellipse element within the pink Grid. The RenderCenterX and RenderCenterY properties are always relative to the part of the control to which the RotateTransform property is applied.

Volume Controls and Pitch Pipes

The two previous examples demonstrate how you can give a visual appearance to TouchDial by either setting the Template property explicitly in markup or, if you need to share templates among multiple controls, by referencing a ControlTemplate defined as a resource.

You can also derive a new class from TouchDial and use the XAML file solely for setting a template. This is the case with the RidgedTouchDial in the Petzold.MultiTouch library. RidgedTouchDial is the same as TouchDial except it has a specific size and visual appearance (which you’ll see shortly).

It is also possible to use TouchDial (or a derived class like RidgedTouchDial) within a class derived from UserControl. The advantage of this approach is that you can hide all the properties defined by RangeBase, including Minimum, Maximum and Value, and replace them with a new property.

This is the case with VolumeControl. VolumeControl derives from RidgedTouchDial for its visual appearance and defines a new property named Volume. The Volume property is backed by a dependency property and any changes to that property fire a VolumeChanged event.

The XAML file for VolumeControl simply references the RidgedTouchDial control and sets several properties, including Minimum, Maximum and Value:

<multitouch:RidgedTouchDial x:Name="touchDial"
  Background="{Binding Background}"
  Minimum="-150" Maximum="150" Value="-150"
  ValueChanged="OnTouchDialValueChanged" />

Thus, the TouchDial can rotate through 300 degrees from the minimum position to the maximum position. Figure 8 shows the VolumeControl.xaml.cs. The control translates the 300 degree range of the dial into the logarithmic decibel scale 0 through 96.

Figure 8 The C# File for VolumeControl

using System;
using System.Windows;
using System.Windows.Controls;

namespace Petzold.MultiTouch {
  public partial class VolumeControl : UserControl {
    public static readonly DependencyProperty VolumeProperty =
      DependencyProperty.Register("Volume",
        typeof(double),
        typeof(VolumeControl),
        new PropertyMetadata(0.0, OnVolumeChanged));

    public event DependencyPropertyChangedEventHandler VolumeChanged;

    public VolumeControl() {
      DataContext = this;
      InitializeComponent();
    }

    public double Volume {
      set { SetValue(VolumeProperty, value); }
      get { return (double)GetValue(VolumeProperty); }
    }

    void OnTouchDialValueChanged(object sender,
      RoutedPropertyChangedEventArgs<double> args) {

      Volume = 96 * (args.NewValue + 150) / 300;
    }

    static void OnVolumeChanged(DependencyObject obj,
      DependencyPropertyChangedEventArgs args) {

      (obj as VolumeControl).OnVolumeChanged(args);
    }

    protected virtual void OnVolumeChanged(
      DependencyPropertyChangedEventArgs args) {

      touchDial.Value = 300 * Volume / 96 - 150;

      if (VolumeChanged != null)
        VolumeChanged(this, args);
    }
  }
}
Why 96? Well, although the decibel scale is based on decimal numbers—whenever the amplitude of a signal increases by a multiplicative factor of 10, the loudness increases linearly by 20 decibels—it is also true that 10 to the 3rd power is approximately 2 to the 10th power. Three factors of 10 (60 decibels) thus correspond to 10 doublings of amplitude, which means that each doubling of amplitude increases the loudness by about 6 decibels. Therefore, if you represent amplitude with a 16-bit value—which is the case with CD and PC sound—you get a range of 16 bits times 6 decibels per bit, or 96 decibels.
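The arithmetic is easy to verify. A decibel value is 20 times the base-10 logarithm of an amplitude ratio, so a doubling of amplitude is exactly 20·log10(2), or about 6.02 decibels, and 16 bits of amplitude resolution span just over 96 decibels (a quick check in Python, not Silverlight code):

```python
import math

# Decibels are 20 times the base-10 log of an amplitude ratio.
def amplitude_to_db(ratio):
    return 20 * math.log10(ratio)

print(amplitude_to_db(10))       # 20.0 (tenfold amplitude = 20 dB)
print(amplitude_to_db(2))        # about 6.02 dB per doubling of amplitude
print(amplitude_to_db(2 ** 16))  # about 96.3 dB for 16-bit audio
```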

The PitchPipeControl class also derives from UserControl and defines a new property named Frequency. The XAML file includes a TouchDial control as well as a bunch of TextBlocks to show the 12 notes of the octave. PitchPipeControl also makes use of another property of TouchDial I haven’t discussed yet: If you set SnapIncrement to a nonzero angle in degrees, the motion of the dial will not be smooth, but will jump between increments. Because PitchPipeControl can be set to any of the 12 notes of the octave, the SnapIncrement is set to 30 degrees.
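Snapping is simple arithmetic: the raw rotation angle is rounded to the nearest multiple of the increment. A sketch of the idea (illustrative Python, not TouchDial's actual implementation):

```python
def snap(angle, increment):
    """Round an angle to the nearest multiple of the snap increment."""
    if increment == 0:
        return angle          # no snapping; the motion stays smooth
    return round(angle / increment) * increment

# With a 30-degree increment, the dial jumps between the 12 note positions.
print(snap(37, 30))  # 30
print(snap(46, 30))  # 60
print(snap(37, 0))   # 37
```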

Figure 9 shows the PitchPipe program, which combines VolumeControl and PitchPipeControl. You can run PitchPipe online.

Figure 9 The PitchPipe Program

The Bonus Program

Earlier in this article I mentioned a control named PianoKey in the context of an example. PianoKey is an actual control, and it is one of several controls in the Piano program, which you can run online. The program is intended to be displayed with your browser maximized. (Or press F11 to make Internet Explorer go into Full Screen mode and get even more room.) A very tiny rendition is shown in Figure 10. The keyboard is divided into overlapping treble and bass parts. The red dots indicate Middle C.

Figure 10 The Piano Program

It is for this program that I wrote TouchManager, because the Piano program uses touch in three different ways. I’ve already discussed the blue VolumeControl, which captures the touch point on a TouchDown event and releases capture on TouchUp. The PianoKey controls that make up the keyboards also use TouchManager, but these controls only listen to the TouchEnter and TouchLeave events. You can indeed run your fingers across the keys for glissando effects. The brown rectangles that function as sustain pedals are ordinary Silverlight ToggleButton controls. These are not specifically touch-enabled; instead, touch points are converted to mouse events.

The Piano program demonstrates three different ways to use multi-touch. I suspect that there are many, many more.


Charles Petzold is a longtime contributing editor to MSDN Magazine. His most recent book is “The Annotated Turing: A Guided Tour Through Alan Turing’s Historic Paper on Computability and the Turing Machine” (Wiley, 2008). Petzold blogs on his Web site.

Thanks to the following technical experts for reviewing this article:  Robert Levy and Anson Tsao