September 2010

Volume 25 Number 09

UI Frontiers - Touch and Response

By Charles Petzold | September 2010

Programming is an engineering discipline rather than a science or a branch of mathematics, so rarely does there exist a single correct solution to a problem. Varieties and variations are the norm, and often it’s illuminating to explore these alternatives rather than focus on one particular approach.

In my article “Multi-Touch Manipulation Events in WPF” in the August issue of MSDN Magazine, I began exploring the exciting multi-touch support introduced into version 4 of the Windows Presentation Foundation (WPF). The Manipulation events serve primarily to consolidate multi-touch input into useful geometric transforms, and to assist in implementing inertia.

In that article, I showed two related approaches to handling Manipulation events on a collection of Image elements. In both cases, the actual events were processed by the Window class. One program defined handlers for the Manipulation events of the manipulated elements. The other approach showed how to override the OnManipulation methods to get the same events routed through the visual tree.

The Custom Class Approach

A third approach also makes sense: A custom class can be defined for the manipulated elements that overrides its own OnManipulation methods rather than leaving this job to a container element. The advantage of this approach is that you can make the custom class a little more attractive by decorating it with a Border or other element; these decorations can also be used to provide visual feedback when the user touches a manipulable element.

When veteran WPF programmers determine they need to make visual changes to a control based on events, they probably think of EventTrigger, but it’s time to start transitioning to the Visual State Manager. Even when deriving from UserControl (the strategy I’ll be using), it’s fairly easy to implement.

An application using the Manipulation events should probably base visual feedback on those same events rather than the low-level TouchDown and TouchUp events. When using the Manipulation events, you’ll want to begin the visual feedback with either the ManipulationStarting or ManipulationStarted event. (It really doesn’t make a difference which you choose for this job.)

However, when experimenting with this feedback, one of the first things you’ll discover is that the ManipulationStarting and ManipulationStarted events are not fired when an element is first touched, but only when it starts moving. This behavior is a holdover from the stylus interface, and you’ll want to change it by setting the following attached property on the manipulated element:

Stylus.IsPressAndHoldEnabled="False"

Now, the ManipulationStarting and ManipulationStarted events are fired when an element is first touched. You’ll want to turn off the visual feedback with either the ManipulationInertiaStarting or ManipulationCompleted event, depending on whether you want the feedback to end when the user’s finger lifts from the screen or after the element has stopped moving due to inertia. If you’re not using inertia (as I won’t be in this article), it doesn’t matter which event you use.

The downloadable code for this article is in a single Visual Studio solution named TouchAndResponseDemos with two projects. The first project, FeedbackAndSmoothZ, includes a custom UserControl derivative named ManipulablePictureFrame that implements the manipulation logic.

ManipulablePictureFrame defines a single property named Child and uses its static constructor to redefine defaults for three properties: HorizontalAlignment, VerticalAlignment and the all-important IsManipulationEnabled. The instance constructor calls InitializeComponent (as usual), but then sets the control’s RenderTransform to a MatrixTransform if it isn’t one already.
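In outline, that setup looks something like the following sketch. (This is my reconstruction rather than a listing from the downloadable code; the exact metadata arguments may differ.)

```csharp
public partial class ManipulablePictureFrame : UserControl
{
    static ManipulablePictureFrame()
    {
        // Redefine defaults: center the frame and enable manipulation.
        HorizontalAlignmentProperty.OverrideMetadata(typeof(ManipulablePictureFrame),
            new FrameworkPropertyMetadata(HorizontalAlignment.Center));
        VerticalAlignmentProperty.OverrideMetadata(typeof(ManipulablePictureFrame),
            new FrameworkPropertyMetadata(VerticalAlignment.Center));
        IsManipulationEnabledProperty.OverrideMetadata(typeof(ManipulablePictureFrame),
            new FrameworkPropertyMetadata(true));
    }

    public ManipulablePictureFrame()
    {
        InitializeComponent();

        // The manipulation code needs a MatrixTransform it can update.
        if (!(RenderTransform is MatrixTransform))
            RenderTransform = new MatrixTransform();
    }
}
```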

During the OnManipulationStarting event, the ManipulablePictureFrame class calls:

VisualStateManager.GoToElementState(this, "Touched", false);

and during the OnManipulationCompleted event, it calls:

VisualStateManager.GoToElementState(this, "Untouched", false);

This is my code file’s sole contribution to implementing visual states. The code performing the actual manipulations will be familiar from the code in last month’s column—with two significant changes:

  • In the OnManipulationStarting method, the ManipulationContainer is set to the element’s parent.
  • The OnManipulationDelta method is just a little simpler because the element being manipulated is the ManipulablePictureFrame object itself.
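Put together, the overrides follow this general shape. (A sketch based on last month’s code, not a verbatim listing; everything outside the Manipulation API is my assumption.)

```csharp
protected override void OnManipulationStarting(ManipulationStartingEventArgs args)
{
    // Manipulate relative to the parent panel, not the element itself.
    args.ManipulationContainer = VisualTreeHelper.GetParent(this) as UIElement;
    VisualStateManager.GoToElementState(this, "Touched", false);
    args.Handled = true;
    base.OnManipulationStarting(args);
}

protected override void OnManipulationDelta(ManipulationDeltaEventArgs args)
{
    ManipulationDelta delta = args.DeltaManipulation;
    Matrix matrix = ((MatrixTransform)RenderTransform).Matrix;

    // Scale and rotate around the manipulation origin, then translate.
    Point center = args.ManipulationOrigin;
    matrix.ScaleAt(delta.Scale.X, delta.Scale.Y, center.X, center.Y);
    matrix.RotateAt(delta.Rotation, center.X, center.Y);
    matrix.Translate(delta.Translation.X, delta.Translation.Y);

    ((MatrixTransform)RenderTransform).Matrix = matrix;
    args.Handled = true;
    base.OnManipulationDelta(args);
}
```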

Figure 1 shows the complete ManipulablePictureFrame.xaml file.

Figure 1 The ManipulablePictureFrame.xaml File

<UserControl x:Class="FeedbackAndSmoothZ.ManipulablePictureFrame"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             Stylus.IsPressAndHoldEnabled="False"
             Name="this">
    
    <VisualStateManager.VisualStateGroups>
        <VisualStateGroup x:Name="TouchStates">
            <VisualState x:Name="Touched">
                <Storyboard>
                    <DoubleAnimation Storyboard.TargetName="maskBorder"
                                     Storyboard.TargetProperty="Opacity"
                                     To="0.33" Duration="0:0:0.25" />

                    <DoubleAnimation Storyboard.TargetName="dropShadow"
                                     Storyboard.TargetProperty="ShadowDepth"
                                     To="20" Duration="0:0:0.25" />
                </Storyboard>
            </VisualState>
            <VisualState x:Name="Untouched">
                <Storyboard>
                    <DoubleAnimation Storyboard.TargetName="maskBorder"
                                     Storyboard.TargetProperty="Opacity"
                                     To="0" Duration="0:0:0.1" />

                    <DoubleAnimation Storyboard.TargetName="dropShadow"
                                     Storyboard.TargetProperty="ShadowDepth"
                                     To="5" Duration="0:0:0.1" />
                </Storyboard>
            </VisualState>
        </VisualStateGroup>
    </VisualStateManager.VisualStateGroups>
    
    <Grid>
        <Grid.Effect>
            <DropShadowEffect x:Name="dropShadow" />
        </Grid.Effect>
        
        <!-- Holds the photo (or other element) -->
        <Border x:Name="border" 
                Margin="24" />
        
        <!-- Provides visual feedback -->
        <Border x:Name="maskBorder" 
                Margin="24" 
                Background="White" 
                Opacity="0" />
        
        <!-- Draws the frame -->
        <Rectangle Stroke="{Binding ElementName=this, Path=Foreground}" 
                   StrokeThickness="24" 
                   StrokeDashArray="0 0.9" 
                   StrokeDashCap="Round" 
                   RadiusX="24" 
                   RadiusY="24" />
        
        <Rectangle Stroke="{Binding ElementName=this, Path=Foreground}" 
                   StrokeThickness="8" 
                   Margin="16" 
                   RadiusX="24" 
                   RadiusY="24" />
    </Grid>
</UserControl>

The Border named “border” is used to host the child of the ManipulablePictureFrame class. This will probably be an Image element, but it doesn’t have to be. The two Rectangle elements draw a type of “scalloped” frame around the Border, and the second Border is used for visual feedback.

While an element is being moved, the animations in ManipulablePictureFrame.xaml “lighten” the picture a bit—actually, it’s more of a “washing out” effect—and increase the drop shadow, as shown in Figure 2.

Figure 2 A Highlighted Element in the FeedbackAndSmoothZ Program

Pretty much any kind of simple highlighting can provide visual feedback during touch events. However, if you’re working with small elements that can be touched and manipulated, you’ll want to make the elements larger when they’re touched so they won’t be entirely hidden by the user’s finger. (On the other hand, you don’t want to make an element larger for visual feedback if you’re also allowing the user to resize the element. It’s very disconcerting to manipulate an element into a desired size and then have it shrink a little when you lift your fingers from the screen!)

You’ll notice that as you make the images smaller and larger, the frame shrinks or expands accordingly. Is this correct behavior? Perhaps. Perhaps not. I’ll show an alternative to this behavior toward the end of this article.

Smooth Z Transitions

In the programs I showed last month, touching a photo would cause it to jump to the foreground. This was just about the simplest approach I could think of and required setting new Panel.ZIndex attached properties for all the Image elements.

A brief refresher: Normally when children of a Panel overlap, they are arranged from background to foreground based on their position in the Children collection of the Panel. However, the Panel class defines an attached property named ZIndex that effectively supersedes the child index. (The name alludes to the Z-axis orthogonal to the conventional XY plane of the screen, which conceptually comes out of the screen.) Elements with a lower ZIndex value are in the background; higher ZIndex values put an element in the foreground. If two or more overlapping elements have the same ZIndex setting (which is the case by default), their child indices in the Children collection are used instead to determine which is on top of the other.

In the earlier programs, I used the following code to set new Panel.ZIndex values, where the variable element is the element being touched and pnl (of type Panel) is the parent of that element and its siblings:

for (int i = 0; i < pnl.Children.Count; i++)
    Panel.SetZIndex(pnl.Children[i],
        pnl.Children[i] == element ? pnl.Children.Count : i);

This code ensures that the touched element gets the highest ZIndex and appears in the foreground.

Unfortunately, the touched element jumps to the foreground in a sudden, rather unnatural, movement. Sometimes other elements switch places at the same time. (If you have four overlapping elements and touch the first, it gets a ZIndex of 4 and the others have ZIndex values of 1, 2 and 3. Now if you touch the fourth, the first goes back to a ZIndex of 0 and will suddenly go behind all the others.)

My goal was to avoid the sudden snapping of elements to the foreground and background. I wanted a smoother effect that mimicked the process of slipping a photo from underneath a pile and then slipping it back on top. In my mind, I started thinking of these transitions as “smooth Z.” Nothing would jump to the foreground or background, but as you moved an element around, eventually it would find itself on top of all the others. (An alternative approach is implemented in the ScatterView control available for download from CodePlex at scatterview.codeplex.com/releases/view/24159. ScatterView is certainly preferable when dealing with large numbers of items.)

In implementing this algorithm, I set a few criteria for myself. First, I didn’t want to maintain state information from one move event to the next. In other words, I didn’t want to analyze whether the manipulated element was intersecting another element previously but was no longer. Second, I didn’t want to perform memory allocations during the ManipulationDelta events because there might be many of them. Third, to avoid too much complexity, I wanted to restrict changes of the relative ZIndex to only the manipulated element.

The complete algorithm is shown in Figure 3. Crucial to the approach is determining whether two sibling elements visually intersect. There are several ways to go about this, but the code I used (in the AreElementsIntersecting method) seemed the simplest. It reuses two RectangleGeometry objects stored as fields.

Figure 3 The Smooth Z Algorithm

// BumpUpZIndex with reusable SortedDictionary object
SortedDictionary<int, UIElement> childrenByZIndex =
    new SortedDictionary<int, UIElement>();

void BumpUpZIndex(FrameworkElement touchedElement, UIElementCollection siblings)
{
    // Make sure everybody has a unique even ZIndex
    for (int childIndex = 0; childIndex < siblings.Count; childIndex++)
    {
        UIElement child = siblings[childIndex];
        int zIndex = Panel.GetZIndex(child);
        Panel.SetZIndex(child, 2 * (zIndex * siblings.Count + childIndex));
    }

    int zIndexNew = Panel.GetZIndex(touchedElement);
    int zIndexCantGoBeyond = Int32.MaxValue;

    // Don't want to jump ahead of any intersecting elements that are on top
    foreach (UIElement child in siblings)
        if (child != touchedElement &&
            AreElementsIntersecting(touchedElement, (FrameworkElement)child))
        {
            int zIndexChild = Panel.GetZIndex(child);
            if (zIndexChild > Panel.GetZIndex(touchedElement))
                zIndexCantGoBeyond = Math.Min(zIndexCantGoBeyond, zIndexChild);
        }

    // But want to be in front of non-intersecting elements
    foreach (UIElement child in siblings)
        if (child != touchedElement &&
            !AreElementsIntersecting(touchedElement, (FrameworkElement)child))
        {
            // This ZIndex is odd, hence unique
            int zIndexNextHigher = 1 + Panel.GetZIndex(child);
            if (zIndexNextHigher < zIndexCantGoBeyond)
                zIndexNew = Math.Max(zIndexNew, zIndexNextHigher);
        }

    Panel.SetZIndex(touchedElement, zIndexNew);

    // Now give all elements indices from 0 to (siblings.Count - 1)
    childrenByZIndex.Clear();
    int index = 0;

    foreach (UIElement child in siblings)
        childrenByZIndex.Add(Panel.GetZIndex(child), child);

    foreach (UIElement child in childrenByZIndex.Values)
        Panel.SetZIndex(child, index++);
}

// Test if elements are intersecting with reusable RectangleGeometry objects
RectangleGeometry rectGeo1 = new RectangleGeometry();
RectangleGeometry rectGeo2 = new RectangleGeometry();

bool AreElementsIntersecting(FrameworkElement element1, FrameworkElement element2)
{
    rectGeo1.Rect = new Rect(new Size(element1.ActualWidth, element1.ActualHeight));
    rectGeo1.Transform = element1.RenderTransform;

    rectGeo2.Rect = new Rect(new Size(element2.ActualWidth, element2.ActualHeight));
    rectGeo2.Transform = element2.RenderTransform;

    return rectGeo1.FillContainsWithDetail(rectGeo2) != IntersectionDetail.Empty;
}

The BumpUpZIndex method performs the bulk of the work. It begins by making sure all the siblings have unique ZIndex values, and that all these values are even numbers. The new ZIndex for the manipulated element can’t be higher than any ZIndex value of any element that’s intersecting and currently on top of the manipulated element. Taking this limit into account, the code attempts to assign a new ZIndex that’s higher than the ZIndex values of all non-intersecting elements.

The code I’ve discussed so far will normally have the effect of progressively increasing ZIndex values without limit, eventually exceeding the maximum positive integer value and becoming negative. This situation is avoided using a SortedDictionary. All the siblings are put into the dictionary with their ZIndex values as keys. Then the elements can be given new ZIndex values based on their indices in the dictionary.

The Smooth Z algorithm has a quirk or two. If the manipulated element is intersecting element A but not element B, then it can’t be slipped on top of B if B has a higher ZIndex than A. Also, there’s been no special accommodation for manipulating two or more elements at the same time.

Manipulation Without Transforms

In all the examples I’ve shown so far, I’ve used information delivered with the ManipulationDelta event to alter the RenderTransform of the manipulated element. That’s not the only option. In fact, if you don’t need rotation, you can implement multi-touch manipulation without any transforms at all.

This “no transform” approach involves using a Canvas as a container for the manipulated elements. You can then move the elements on the Canvas by setting the Canvas.Left and Canvas.Top attached properties. Changing the size of the elements requires manipulating the Height and Width properties, either with the same percentage Scale values used previously or with the absolute Expansion values.

One distinct advantage of this approach is that you can decorate the manipulated elements with a border that won’t itself become larger and smaller as you change the size of the element.

This technique is demonstrated in the NoTransformManipulation project, which includes a UserControl derivative named NoTransformPictureFrame that implements the manipulation logic.

The picture frame in this new class isn’t nearly as fancy as the one in ManipulablePictureFrame. The earlier picture frame used a dotted line for a scalloped effect. If you make such a frame larger to accommodate a larger child but without applying a transform, the line thickness will remain the same and the number of dots in the dotted line will increase! This looks very peculiar and is probably too distracting for a real-life program. The picture frame in the new file is just a simple Border with rounded corners.

In the MainPage.xaml file in the NoTransformManipulation project, five NoTransformPictureFrame objects are assembled on a Canvas, all containing Image elements and all with unique Canvas.Left and Canvas.Top attached properties. Also, I’ve given each NoTransformPictureFrame a Width of 200 but no Height. When resizing Image elements, it’s usually best to specify just one dimension and let the element choose its other dimension to maintain the proper aspect ratio.

The NoTransformPictureFrame.xaml.cs file is similar in structure to the ManipulablePictureFrame code except that no transform code is required. The OnManipulationDelta override adjusts the Canvas.Left and Canvas.Top attached properties and uses the Expansion values to increase the Width property of the element. Just a little bit of trickiness is required when scaling is in effect, because the translation factors need to be adjusted to accommodate the center of scaling.
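A sketch of that override might look like the following, assuming the ManipulationContainer is the Canvas itself. (The coordinate bookkeeping here is simplified relative to the downloadable code.)

```csharp
protected override void OnManipulationDelta(ManipulationDeltaEventArgs args)
{
    ManipulationDelta delta = args.DeltaManipulation;

    // Translation moves the element on the Canvas.
    double left = Canvas.GetLeft(this) + delta.Translation.X;
    double top = Canvas.GetTop(this) + delta.Translation.Y;

    if (delta.Scale.X != 1)
    {
        // The origin is relative to the Canvas; convert it to element
        // coordinates so the point under the fingers stays put.
        double centerX = args.ManipulationOrigin.X - left;
        double centerY = args.ManipulationOrigin.Y - top;

        left -= (delta.Scale.X - 1) * centerX;
        top -= (delta.Scale.X - 1) * centerY;

        // Only Width is set; Height follows to preserve the aspect ratio.
        Width *= delta.Scale.X;
    }

    Canvas.SetLeft(this, left);
    Canvas.SetTop(this, top);
    args.Handled = true;
    base.OnManipulationDelta(args);
}
```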

A change was also required in the AreElementsIntersecting method that plays a crucial role in the smooth Z transitions. The earlier method constructed two RectangleGeometry objects reflecting the untransformed dimensions of the two elements and then applied the two RenderTransform settings. The replacement method is shown in Figure 4. These RectangleGeometry objects are based solely on the actual size of the element offset by the Canvas.Left and Canvas.Top attached properties.

Figure 4 Alternative Smooth Z Logic for Manipulation Without Transforms

bool AreElementsIntersecting(FrameworkElement element1, FrameworkElement element2)
{
    rectGeo1.Rect = new Rect(Canvas.GetLeft(element1), Canvas.GetTop(element1),
      element1.ActualWidth, element1.ActualHeight);
    rectGeo2.Rect = new Rect(Canvas.GetLeft(element2), Canvas.GetTop(element2),
      element2.ActualWidth, element2.ActualHeight);
    return rectGeo1.FillContainsWithDetail(rectGeo2) != IntersectionDetail.Empty;
}

Remaining Issues

As I’ve been discussing the Manipulation events, I’ve been ignoring an important feature, and the elephant in the room has become larger and larger. That feature is inertia, which I’ll tackle in the next issue.


Charles Petzold is a longtime contributing editor to MSDN Magazine. He’s currently writing “Programming Windows Phone 7,” which will be published as a free downloadable e-book in the fall of 2010. A preview edition is currently available through his Web site, charlespetzold.com.

Thanks to the following technical experts for reviewing this column: Doug Kramer and Robert Levy