December 2009

Volume 24 Number 12

Going Places - Enhancing Windows Touch Applications for Mobile Users

By Gus Class | December 2009

Windows 7 introduces Windows Touch, which enhances touch input on capable hardware and provides a solid platform for building touch applications. Potentially, this means that you can develop remarkably intuitive interfaces that users of all ages and computing abilities can understand with a minimal amount of training or instruction.

The magic behind this functionality is the Windows Touch API. Using this API, you can retrieve information about where a user is touching the screen and about a user’s on-screen gestures. You also have access to real-world physics for user interface elements. Moving an on-screen object becomes as easy as moving an object in the real world. Stretching an object is like stretching a piece of elastic. When users interact with a well-implemented touch application, they feel as though they’re interacting with the technology of the future, or even better, they don’t notice that they’re using an application at all. They don’t have to use a mouse, a stylus or shortcut keys or precisely select menu items to get at the application’s core functionality.

Applications tailored to mobile use should incorporate specific requirements to ensure that the experience is well suited to the user’s environment. A poorly implemented touch application can completely defeat the purpose of using Windows Touch. The Windows Touch User Experience guidelines (go.microsoft.com/fwlink/?LinkId=156610) highlight ways that developers can improve the experience for users on the go. These guidelines cover various scenarios relevant to mobile application developers and make it easier to avoid potential pitfalls of Windows Touch development.

If you take away only one thing from this article, remember that when creating an application that targets mobile users, you need to consider aspects that are specific to your type of application. For instance, if your application uses Windows controls, be sure they are of adequate size and have sufficient spacing so that users can touch them easily. If you are creating an application that can take advantage of flicks, be sure that the flick actions are properly handled.
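As a concrete illustration of the control-sizing point, the following sketch converts a desired physical target size to pixels at the current DPI. The helper name and the 9 mm figure are my own assumptions for illustration, not part of the sample application:

```cpp
// Illustrative helper (not part of the sample): convert a physical
// touch-target size in millimeters to pixels at the current DPI,
// so that controls stay finger-sized at any display setting.
int TouchTargetPixels(HWND hWnd, double mm)
{
    HDC hdc = GetDC(hWnd);
    int dpi = GetDeviceCaps(hdc, LOGPIXELSX); // e.g., 96 or 120
    ReleaseDC(hWnd, hdc);
    // 25.4 millimeters per inch
    return static_cast<int>(mm * dpi / 25.4 + 0.5);
}
```

At 96 DPI, a 9 mm target works out to roughly 34 pixels on a side; at 120 DPI it grows to about 43 pixels.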

First Things First

In this article, I’ll take a sample touch application and enhance it for mobile scenarios. I assume that you have some knowledge of COM and Windows Touch and that you have Windows Touch-capable hardware. For a primer on Windows Touch, go to go.microsoft.com/fwlink/?LinkId=156612 or read Yochay Kiriaty’s article at msdn.microsoft.com/magazine/ee336016.aspx.

The example is on the MSDN Code Gallery at code.msdn.microsoft.com/windowstouchmanip. The Downloads tab contains two .zip files, the first without mobile enhancements and the second with them. Download the file named Multiple Manipulators.zip, expand it and compile the project.

To be honest, using the example is at times like trying to thread a needle while wearing mittens: the functionality is diminished to a point that frustrates users. For example, if you try to select one of two overlapping objects in the region where they overlap, you will select and move both objects. You can also resize an object until it’s so small that you can’t resize it again. I’ll show you how to fix these problems and make other changes that improve the user experience in the areas of general usability, object selection and the use of a natural user interface. Remember that the considerations you make for each mobile application depend on how users will interact with it. The issues I cover here should be used as guidelines only for this specific application.

General Usability

When a user is manipulating graphical objects in a mobile application, he must be able to perform tasks without the use of a keyboard and mouse. Also, when a mobile user is using high DPI settings or is connected to multiple screens, the application must behave consistently. (High DPI requirements are discussed in detail at go.microsoft.com/fwlink/?LinkId=153387.)

For the sample application, Windows Touch implicitly addresses the issue of obtaining input from the user without a mouse and keyboard. Users can use touch input to perform actions such as object translation, scaling and so on. A related consideration is the reverse: supporting mouse and keyboard in an application designed for touch, so that a user can drive the manipulation processor with any input device. Figure 1 shows how you could let a user simulate touch input through mouse input by adding some utility functions to the sample application’s Drawable class. You also have to add handlers to WndProc to hook mouse input to the input processor (see Figure 2).

Figure 1 Utility Functions for Simulating Touch Input with the Mouse

VOID Drawable::FillInputData(TOUCHINPUT* inData, DWORD cursor, DWORD eType, DWORD time, int x, int y)
{
    inData->dwID = cursor;
    inData->dwFlags = eType;
    inData->dwTime = time;
    inData->x = x;
    inData->y = y;
}

void Drawable::ProcessMouseData(HWND hWnd, UINT msg, WPARAM wParam,
    LPARAM lParam){
    TOUCHINPUT tInput;
    if (this->getCursorID() == MOUSE_CURSOR_ID){          
        switch (msg){
            case WM_LBUTTONDOWN:
                FillInputData(&tInput, MOUSE_CURSOR_ID, TOUCHEVENTF_DOWN, (DWORD)GetMessageTime(),LOWORD(lParam) * 100,HIWORD(lParam) * 100);
                ProcessInputs(hWnd, 1, &tInput, 0);
                break;

            case WM_MOUSEMOVE:
                // wParam carries a combination of key-state flags,
                // so test the bit rather than comparing for equality
                if (wParam & MK_LBUTTON)
                {
                    FillInputData(&tInput, MOUSE_CURSOR_ID, TOUCHEVENTF_MOVE, (DWORD)GetMessageTime(),LOWORD(lParam) * 100, HIWORD(lParam) * 100);
                    ProcessInputs(hWnd, 1, &tInput, 0);
                }          
                break;

            case WM_LBUTTONUP:
                FillInputData(&tInput, MOUSE_CURSOR_ID, TOUCHEVENTF_UP, (DWORD)GetMessageTime(),LOWORD(lParam) * 100, HIWORD(lParam) * 100);            
                ProcessInputs(hWnd, 1, &tInput, 0);
                setCursorID(-1);
                break;
            default:
                break;
        }   
    }     
}

Figure 2 Changes from WndProc

case WM_LBUTTONDOWN:
    case WM_MOUSEMOVE:   
    case WM_LBUTTONUP:
        for (i=0; i<drawables; i++){
          // contact start
          if (message == WM_LBUTTONDOWN && draw[i]->IsContacted(LOWORD(lParam), HIWORD(lParam), MOUSE_CURSOR_ID)){
              draw[i]->setCursorID(MOUSE_CURSOR_ID);
          }
          // contact end
          if (message == WM_LBUTTONUP && draw[i]->getCursorID() == MOUSE_CURSOR_ID){
            draw[i]->setCursorID(-1);      
          }
          draw[i]->ProcessMouseData(hWnd, message, wParam, lParam);
        }        
        InvalidateRect(hWnd, NULL, false);
        break;

To address high DPI requirements, you can add a project manifest to the build settings to make the application aware of the DPI settings. You do this so that the coordinate space is correct when you are working at various DPI levels. (If you are interested in seeing how the application behaves after you have changed the DPI level, right-click your desktop, click Personalize and then change your DPI level in the Display control panel.)

The following XML shows how this manifest could be defined to make your application compatible with high DPI settings:

<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0" 
   xmlns:asmv3="urn:schemas-microsoft-com:asm.v3" >
  <asmv3:application>
    <asmv3:windowsSettings xmlns=
"http://schemas.microsoft.com/SMI/2005/WindowsSettings">
      <dpiAware>true</dpiAware>
    </asmv3:windowsSettings>
  </asmv3:application>
</assembly>

Once the project manifest is added to the project’s properties, the application correctly sends touch input information to the manipulation processor regardless of the user’s DPI settings. You can also use the ScreenToClient method (see go.microsoft.com/fwlink/?LinkID=153391 for more information) to ensure that the coordinate space is set to the application coordinates rather than to the screen coordinates. Figure 3 shows the changes to the ProcessInputs member function of the Drawable class that convert the screen points to client points. Now when the user connects an external monitor to a Windows Touch–enabled PC, the coordinate space of your application will remain consistent and DPI aware.

Figure 3 Converting Screen Points to Client Points

POINT ptInput;
void Drawable::ProcessInputs(HWND hWnd, UINT cInputs, 
     PTOUCHINPUT pInputs, LPARAM lParam){
  for (int i=0; i < static_cast<INT>(cInputs); i++){
...
      ScreenToClient(hWnd, &ptInput);
                
      if (ti.dwFlags & TOUCHEVENTF_DOWN){
        if (IsContacted( ptInput.x, ptInput.y, ti.dwID) ){
          pManip->ProcessDownWithTime(ti.dwID, static_cast<FLOAT>
(ptInput.x), static_cast<FLOAT>( ptInput.y), ti.dwTime);                  
          setCursorID(ti.dwID);                  
            
          if (!CloseTouchInputHandle((HTOUCHINPUT)lParam)) {
            // Error handling                
          }
        }
      }
      if (pInputs[i].dwFlags & TOUCHEVENTF_MOVE){
        pManip->ProcessMoveWithTime(ti.dwID, static_cast<FLOAT>
(ptInput.x), static_cast<FLOAT>( ptInput.y), ti.dwTime);                  
      }
      if (pInputs[i].dwFlags & TOUCHEVENTF_UP){
        pManip->ProcessUpWithTime(ti.dwID, static_cast<FLOAT>
(ptInput.x), static_cast<FLOAT>( ptInput.y), ti.dwTime);
        setCursorID(-1);
      }      
      // If you handled the message and don’t want anything else done 
      // with it, you can close it
   
  }
}

Object Selection

To ensure that object selection works as the user expects, the user must be able to select overlapping objects in a natural and intuitive manner, and must be able to select and easily transform objects on smaller form factors and on screens with limited touch input resolution.

As the application currently operates, when a user selects an overlapping object, the application sends touch data to all the objects that are under the point where the user touches the window. To modify the application to stop handling touch input after the first touched object is encountered, you need to close the touch input handle when an object is selected. Figure 4 shows how you can update the touch input handler to stop handling the touch message after the first object is contacted. 

Figure 4 Updating the Touch Input Handler

POINT ptInput;
void Drawable::ProcessInputs(HWND hWnd, UINT cInputs, 
     PTOUCHINPUT pInputs, LPARAM lParam){
  BOOL fContinue = TRUE;
  for (int i=0; i < static_cast<INT>(cInputs) && fContinue; i++){
...                
      if (ti.dwFlags & TOUCHEVENTF_DOWN){
        if (IsContacted( ptInput.x, ptInput.y, ti.dwID) ){
          pManip->ProcessDownWithTime(ti.dwID, static_cast<FLOAT>
(ptInput.x), static_cast<FLOAT>(ptInput.y), ti.dwTime);                  
          setCursorID(ti.dwID);                  
            
          fContinue = FALSE;
        }
      }
...
  }
  CloseTouchInputHandle((HTOUCHINPUT)lParam);

}

After you implement this change, once an object is contacted, touch data stops being sent to the other objects in the array. To change the application so that only the first object under mouse input receives touch input, you can break out of the switch in the input processing statement for mouse down input, which short-circuits the logic for mouse input. Figure 5 demonstrates the changes to the switch statement in the mouse input handler.

Figure 5 Changing the Switch Statement in the Mouse Input Handler

case WM_LBUTTONDOWN:
        for (i=0; i<drawables; i++){
          if (draw[i]->IsContacted(LOWORD(lParam), HIWORD(lParam), MOUSE_CURSOR_ID)){
              draw[i]->setCursorID(MOUSE_CURSOR_ID);
              draw[i]->ProcessMouseData(hWnd, message, wParam, lParam);   
              break;
          }
        }
...

Next, you should change your application to ensure that when a user resizes objects, the objects will not become so small that the user cannot select or resize them again. To address this, you can use settings in the Manipulations API to restrict how small an object can be sized. The following changes are made to the manipulation processor utility of the Drawable object:

void Drawable::SetUpManipulator(void){
  pManip->put_MinimumScaleRotateRadius(4000.0f);  
}

Now when you scale an object, scale values less than 4,000 centipixels (40 pixels) are ignored by the application. Each Drawable object can have unique constraints set in its SetUpManipulator method to ensure that the object can be manipulated only in appropriate ways.
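For example, SetUpManipulator could also restrict which manipulation types an object supports. The specific flags and values below are illustrative choices on my part, not from the sample:

```cpp
// Illustrative per-object configuration (values are arbitrary):
// each Drawable configures its own manipulation processor.
void Drawable::SetUpManipulator(void){
    // Ignore scale/rotate gestures whose contact radius is below
    // 40 pixels. The API works in centipixels.
    pManip->put_MinimumScaleRotateRadius(4000.0f);

    // Allow translation and scaling for this object, but not
    // rotation, by masking the supported manipulations.
    pManip->put_SupportedManipulations(
        MANIPULATION_TRANSLATE_X | MANIPULATION_TRANSLATE_Y |
        MANIPULATION_SCALE);
}
```

An object that should only slide, for instance, could pass just the two translation flags instead.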

Natural User Interface

In an application designed to have a natural look and feel, a user should be able to perform simultaneous manipulations on multiple objects. Objects should have simple physics when they’re moved across the screen, similar to how they behave in the real world, and the user should not be able to manipulate objects off screen.

By design, applications that use the Manipulations API should support simultaneous manipulation of objects. Because this example uses the Manipulations API, simultaneous manipulations are enabled automatically. When you use the Gestures API for Windows Touch support, simultaneous manipulation of objects is not possible, nor are compound gestures such as pan+zoom and zoom+rotate. For this reason, you should use the Manipulations API when you are designing a Windows Touch application that targets mobile PCs.
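Note that a window receives the WM_TOUCH messages that feed the manipulation processor only after it opts in; by default, touch input arrives as gesture messages instead. If you are wiring up the Manipulations API in your own application, the registration typically happens when the window is created. A minimal sketch:

```cpp
// Inside the application's WndProc: opt in to raw touch messages
// so that WM_TOUCH (rather than WM_GESTURE) is delivered.
case WM_CREATE:
    if (!RegisterTouchWindow(hWnd, 0)) {
        // Registration failed (for example, the hardware is not
        // touch-capable); fall back to mouse input only.
    }
    break;
```

The second parameter accepts modifier flags such as TWF_WANTPALM; 0 requests the default behavior.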

The Windows Touch API includes the IInertiaProcessor interface to enable support for simple physics (inertia). IInertiaProcessor uses some of the same methods as the IManipulationProcessor interface, which simplifies adding inertia to applications that already use manipulations. To enable inertia, you extend the existing event sink for the manipulation processor, add a reference to an IInertiaProcessor instance on the Drawable object, connect event data from the event sink to the IInertiaProcessor object, and use a timer to make the IInertiaProcessor raise manipulation events for inertia. Let’s look at each operation in more detail.

First you need to update the event sink to enable support for sending data to an IInertiaProcessor interface. The following members and constructor definitions are added to the event sink implementation header:

class CManipulationEventSink : public _IManipulationEvents
{
public:
    CManipulationEventSink(IInertiaProcessor *inert, Drawable* d);
    CManipulationEventSink(IManipulationProcessor *manip, IInertiaProcessor *inert, Drawable* d);

...
protected:
    IInertiaProcessor*      m_pInert;
    BOOL fExtrapolating;

You also add a member and an access method to the event sink for setting a HWND that is used for timers, as shown here:

public:

void SetWindow(HWND hWnd) {m_hWnd = hWnd;}
...
private:
...
HWND m_hWnd;

Next, change the constructor that takes an IManipulationProcessor interface to accept an IInertiaProcessor interface, and add a constructor that accepts only an IInertiaProcessor interface. The constructor that takes an IManipulationProcessor interface uses the reference to the IInertiaProcessor interface to trigger inertia from the ManipulationCompleted event. The constructor that takes only an IInertiaProcessor interface handles events that are for inertia. Figure 6 shows the implementations of these constructors.

Figure 6 Implementations of the IManipulationProcessor and IInertiaProcessor Constructors

CManipulationEventSink::CManipulationEventSink(IManipulationProcessor *manip, IInertiaProcessor *inert, Drawable* d){
    drawable = d;
    // Not extrapolating inertia in this case; this sink handles
    // events from the manipulation processor
    fExtrapolating = false;

    //Set initial ref count to 1
    m_cRefCount = 1;

    m_pManip = NULL;
    m_pInert = inert;    

    m_cStartedEventCount = 0;
    m_cDeltaEventCount = 0;
    m_cCompletedEventCount = 0;

    HRESULT hr = S_OK;

    //Get the container with the connection points
    IConnectionPointContainer* spConnectionContainer = NULL;
    
    hr = manip->QueryInterface(
      IID_IConnectionPointContainer, 
      (LPVOID*) &spConnectionContainer
      );

    if (spConnectionContainer == NULL){
        // Something went wrong, try to gracefully quit        
    }

    //Get a connection point
    hr = spConnectionContainer->FindConnectionPoint
(__uuidof(_IManipulationEvents), &m_pConnPoint);

    if (m_pConnPoint == NULL){
        // Something went wrong, try to gracefully quit
    }

    DWORD dwCookie;

    //Advise
    hr = m_pConnPoint->Advise(this, &dwCookie);
}
CManipulationEventSink::CManipulationEventSink(IInertiaProcessor *inert, Drawable* d)
{
    drawable = d;
    // Yes, we are extrapolating inertia in this case
    fExtrapolating = true;

    //Set initial ref count to 1
    m_cRefCount = 1;

    m_pManip = NULL;
    m_pInert = inert;    

    m_cStartedEventCount = 0;
    m_cDeltaEventCount = 0;
    m_cCompletedEventCount = 0;

    HRESULT hr = S_OK;

    //Get the container with the connection points
    IConnectionPointContainer* spConnectionContainer = NULL;
    
    hr = inert->QueryInterface(
      IID_IConnectionPointContainer, 
      (LPVOID*) &spConnectionContainer
      );

    if (spConnectionContainer == NULL){
        // Something went wrong, try to gracefully quit        
    }

    //Get a connection point
    hr = spConnectionContainer->FindConnectionPoint
(__uuidof(_IManipulationEvents), &m_pConnPoint);
    if (m_pConnPoint == NULL){
        // Something went wrong, try to gracefully quit
    }

    DWORD dwCookie;

    //Advise
    hr = m_pConnPoint->Advise(this, &dwCookie);
}

Next, you update the Drawable class to enable support for inertia. Add the forward declaration and members shown in Figure 7, including the new pInert member variable.

Figure 7 Updating the Drawable Class

interface IInertiaProcessor;
public:
...
    // Inertia Processor Initiation
    virtual void SetUpInertia(void);

...
protected:

    HWND m_hWnd;
    
    IManipulationProcessor* pManip;
    IInertiaProcessor*      pInert;
    CManipulationEventSink* pEventSink;

The following code shows the simplest implementation of the SetUpInertia method. This method completes any previous processing, sets the inertia processor’s initial origin and then applies configuration settings:

void Drawable::SetUpInertia(void){
    // Complete any previous processing
    pInert->Complete();

    pInert->put_InitialOriginX(originX*100);
    pInert->put_InitialOriginY(originY*100);
       
    // Configure the inertia processor
    pInert->put_DesiredDeceleration(.1f);  
}

After you update the Drawable class, change the Drawable constructor to incorporate the new event sink constructors, as shown in Figure 8.

Figure 8 Incorporating the New Event Sink Constructors

Drawable::Drawable(HWND hWnd){
...
  
    // Initialize manipulators  
    HRESULT hr = CoCreateInstance(CLSID_ManipulationProcessor,
          NULL,
          CLSCTX_INPROC_SERVER,
          IID_IUnknown,
          (VOID**)(&pManip)
    );

    // Initialize inertia processor
    hr = CoCreateInstance(CLSID_InertiaProcessor,
          NULL,
          CLSCTX_INPROC_SERVER,
          IID_IUnknown,
          (VOID**)(&pInert)
    );

    //TODO: test HR 
    pEventSink = new CManipulationEventSink(pManip,pInert, this);
    pInertSink = new CManipulationEventSink(pInert, this);
    pEventSink->SetWindow(hWnd);
    pInertSink->SetWindow(hWnd);

    SetUpManipulator();
    SetUpInertia();
    m_hWnd = hWnd;
}

And now add the following timer message handler to the main program:

case WM_TIMER:
        // wParam indicates the timer ID
        for (int i=0; i<drawables; i++){
            if (wParam == draw[i]->GetIndex() ){
                BOOL b;       
                draw[i]->ProcessInertia(&b);        
            }
        }
    break;
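The ProcessInertia wrapper that the timer handler calls is not shown in the article. A minimal sketch of what it might look like (the implementation details here are my assumption, not the sample’s exact code):

```cpp
// Hypothetical sketch of Drawable::ProcessInertia:
// IInertiaProcessor::Process advances the extrapolation one step and
// raises ManipulationDelta events on the inertia event sink;
// *pbCompleted is set to TRUE once the motion has fully decayed.
void Drawable::ProcessInertia(BOOL* pbCompleted){
    HRESULT hr = pInert->Process(pbCompleted);
    if (SUCCEEDED(hr)) {
        // Repaint so the extrapolated position is drawn.
        InvalidateRect(m_hWnd, NULL, FALSE);
    }
}
```

The WM_TIMER handler above simply calls this once per tick, and the WM_TIMER case already invalidates nothing itself, so the repaint belongs here.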

Once you have your timer handler and your timer is set up, you need to start the timer from the ManipulationCompleted event of the sink that is not extrapolating inertia. Figure 9 shows changes to the completed event that start the timer when the user finishes manipulating an object and stop the timer once inertia is complete.

Figure 9 Changes to the Completed Event

HRESULT STDMETHODCALLTYPE CManipulationEventSink::ManipulationCompleted( 
    /* [in] */ FLOAT x,
    /* [in] */ FLOAT y,
    /* [in] */ FLOAT cumulativeTranslationX,
    /* [in] */ FLOAT cumulativeTranslationY,
    /* [in] */ FLOAT cumulativeScale,
    /* [in] */ FLOAT cumulativeExpansion,
    /* [in] */ FLOAT cumulativeRotation)
{
    m_cCompletedEventCount ++;

    m_fX = x;
    m_fY = y;


    if (m_hWnd){
        if (fExtrapolating){
            //Inertia Complete, stop the timer used for processing
            KillTimer(m_hWnd,drawable->GetIndex());
        }else{ 
            // Setup velocities for inertia processor
            float vX, vY, vA = 0.0f;
            m_pManip->GetVelocityX(&vX);
            m_pManip->GetVelocityY(&vY);
            m_pManip->GetAngularVelocity(&vA);

            drawable->SetUpInertia();

            // Set up the touch coordinate data
            m_pInert->put_InitialVelocityX(vX / 100);
            m_pInert->put_InitialVelocityY(vY / 100);        
                          
            // Start a timer
            SetTimer(m_hWnd, drawable->GetIndex(), 50, 0);   
    
            // Reset sets the initial timestamp
            m_pInert->Reset();
        }
    }
    return S_OK;
}

Notice that reducing the timer interval, the third parameter of SetTimer, results in smoother animation but triggers more update events, which can degrade performance depending on what the event handlers do. For example, changing the interval to 5 produces very smooth animation, but the window is updated far more frequently because of the additional calls to CManipulationEventSink::ManipulationDelta.

Now you can build and run your application, but without additional changes, manipulated objects will drift off screen. To prevent objects from being manipulated off screen, configure the IInertiaProcessor interface to use elastic bounds. Figure 10 shows the changes that should be made to the SetUpInertia method for the Drawable object to initialize the screen boundaries.

Figure 10 Initializing Screen Boundaries

void Drawable::SetUpInertia(void){
(...)
            
    // Configure the inertia processor
    pInert->put_DesiredDeceleration(.1f);

    RECT rect;
    GetClientRect(m_hWnd, &rect);        

    int width = rect.right - rect.left;
    int height = rect.bottom - rect.top;

    int wMargin = static_cast<int>(width  * .1);
    int hMargin = static_cast<int>(height * .1);

    pInert->put_BoundaryLeft(rect.left * 100);
    pInert->put_BoundaryTop(rect.top * 100);
    pInert->put_BoundaryRight(rect.right * 100);
    pInert->put_BoundaryBottom(rect.bottom * 100);

    pInert->put_ElasticMarginTop((rect.top + hMargin) * 100);
    pInert->put_ElasticMarginLeft((rect.left + wMargin) * 100);
    pInert->put_ElasticMarginRight((rect.right - wMargin) * 100);
    pInert->put_ElasticMarginBottom((rect.bottom - hMargin) * 100);

...
}

Looking Forward

Using the Windows Touch API is an effective way to add value to existing applications and is a great way to make your applications stand out. Taking extra time to address the context that your application will be used in allows you to make the most of the Windows Touch API. If you take into consideration the mobility and usability requirements of your application, the application becomes more intuitive, and users need less time to discover its functionality. (Additional resources, including the complete documentation reference for Windows Touch, can be found on MSDN at msdn.microsoft.com/library/dd562197(VS.85).aspx).

With the release of Windows Presentation Foundation (WPF) 4 and the .NET Framework 4, Microsoft will support managed development using controls that enable multiple contact points. If you are a developer working with managed code and looking to enhance your application with multiple-input support, this release is worth checking out. Currently, examples of managed Windows Touch wrappers for C# are included in the Windows SDK.


Gus “gclassy” Class is a programming writer/evangelist for Microsoft, where he has worked on Windows Touch, Tablet PC and Microsoft’s DRM systems. He discusses developer gotchas and offers programming examples on his blog at gclassy.com.

Thanks to the following technical expert for reviewing this article: Xiao Tu

Send your questions and comments for Gus to goplaces@microsoft.com.