Exercise 1: Build a Multitouch Application

Task 1 – Create the Win32 Application

  1. Start Visual Studio 2008 SP1
  2. Select a new C++-based Win32 application project.

  3. Compile and run
  4. We are going to use APIs and macros that were introduced in Windows 7.
    1. Change the WINVER and _WIN32_WINNT definitions in the targetver.h header file to 0x0601
    #ifndef WINVER //Specifies that the minimum required platform is Windows 7
    #define WINVER 0x0601
    #endif

    #ifndef _WIN32_WINNT //Specifies that the minimum required platform is Win 7
    #define _WIN32_WINNT 0x0601
    #endif
  5. Compile and run.

Task 2 – Test the Existence and Readiness of Multitouch Hardware

  1. The application that we are building requires a touch-enabled device. Add the following code before the call to InitInstance() in _tWinMain() to check that the hardware supports touch and is ready:

    BYTE digitizerStatus = (BYTE)GetSystemMetrics(SM_DIGITIZER);
    if ((digitizerStatus & (0x80 + 0x40)) == 0) //Stack Ready + MultiTouch
    {
    MessageBox(0, L"No touch support is currently available", L"Error", MB_OK);
    return 1;
    }

    BYTE nInputs = (BYTE)GetSystemMetrics(SM_MAXIMUMTOUCHES);
    //Append to szTitle; passing szTitle as both the destination and a
    //%s argument of wsprintf() reads and writes the same buffer
    wsprintf(szTitle + lstrlen(szTitle), L" - %d touch inputs", nInputs);

  2. Note that besides checking for touch availability and readiness, we also find out the number of touch inputs that the hardware supports.
  3. Compile and run

Task 3 – Add the Drawing Object Source and Header Files to the Project, and Draw the Rectangle

In the Starter folder, you will find two files: DrawingObject.h and DrawingObject.cpp. Copy them to the project folder and use “Add Existing item…” to add them to the project.

  1. Add an #include DrawingObject.h line at the top of mtGesture.cpp file

    #include "DrawingObject.h"

  2. Add a global variable definition in the //Global Variables: section at the top of the mtGesture.cpp file:

    CDrawingObject g_drawingObject;

  3. Add the following lines to the WndProc. Note that the WM_PAINT case has already been created by the application wizard:

    case WM_SIZE:
    g_drawingObject.ResetObject(LOWORD(lParam), HIWORD(lParam));
    break;

    case WM_PAINT:
    hdc = BeginPaint(hWnd, &ps);
    g_drawingObject.Paint(hdc);
    EndPaint(hWnd, &ps);
    break;

  4. Compile and run
  5. Change the size of the window to trigger a WM_PAINT message. A red rectangle should appear in the middle of the window.

Task 4 – Touch Me Now!

It’s time to begin! By default, a touch-enabled system delivers WM_GESTURE messages to the target window. This is somewhat similar to mouse and keyboard messages: the system consumes the low-level touch input events and calculates the resulting gesture for us. Use the lParam parameter as the handle to extract the gesture information. The GetGestureInfo() API takes the lParam gesture handle and the address of a GESTUREINFO structure variable:

typedef struct tagGESTUREINFO {
    UINT cbSize;            // size, in bytes, of this structure (including variable length Args field)
    DWORD dwFlags;          // see GF_* flags
    DWORD dwID;             // gesture ID, see GID_* defines
    HWND hwndTarget;        // handle to window targeted by this gesture
    POINTS ptsLocation;     // current location of this gesture
    DWORD dwInstanceID;     // internally used
    DWORD dwSequenceID;     // internally used
    ULONGLONG ullArguments; // arguments for gestures (8 BYTES)
    UINT cbExtraArgs;       // size, in bytes, of extra arguments that accompany this gesture
} GESTUREINFO, *PGESTUREINFO;
typedef GESTUREINFO const * PCGESTUREINFO;
/*
* Gesture flags - GESTUREINFO.dwFlags
*/
#define GF_BEGIN 0x00000001
#define GF_INERTIA 0x00000002
#define GF_END 0x00000004

/*
* Gesture IDs
*/
#define GID_BEGIN 1
#define GID_END 2
#define GID_ZOOM 3
#define GID_PAN 4
#define GID_ROTATE 5
#define GID_TWOFINGERTAP 6
#define GID_PRESSANDTAP 7
#define GID_ROLLOVER GID_PRESSANDTAP

After consuming the information that was delivered by calling the GetGestureInfo() we must call the CloseGestureInfoHandle() to release the memory block that the system allocated.

Two fields are very important when responding to the gesture message: dwID and dwFlags. dwID tells us which gesture the user performed: Zoom, Pan, Rotate, and so forth. dwFlags tells us whether this is the first message about the gesture, the last one, or whether the user has removed the fingers from the screen while an inertia engine continues to issue gesture messages.

There are simple gestures, such as "Two Finger Tap", that the application needs to respond to only once; other gestures are a little more complicated because they send many messages during the user operation. For these kinds of gestures (Zoom, Rotate, Pan) we need to respond differently depending upon which of these conditions is in play. For Pan and Zoom, we do nothing for the first gesture message in the series. The rotate begin message comes with a rotation angle, so we need to rotate the drawing object. Whenever a gesture message is not the first in a series, we calculate the difference between the previous argument and the new argument to extract the zoom factor, the pan distance, or the relative rotation angle. In this way, we can update the application while users touch and move their fingers on the screen.

  1. Let’s move! Add the following code to the WndProc:

    case WM_GESTURE:
    return ProcessGestureMessage(hWnd, wParam, lParam);

  2. Add the following function right before the WndProc:

    LRESULT ProcessGestureMessage(HWND hWnd, WPARAM wParam, LPARAM lParam)
    {
    static POINT lastPoint;
    static ULONGLONG lastArguments;

    GESTUREINFO gi;

    gi.cbSize = sizeof(GESTUREINFO);
    gi.dwFlags = 0;
    gi.ptsLocation.x = 0;
    gi.ptsLocation.y = 0;
    gi.dwID = 0;
    gi.dwInstanceID = 0;
    gi.dwSequenceID = 0;

    BOOL bResult = GetGestureInfo((HGESTUREINFO)lParam, &gi);

    switch(gi.dwID)
    {
    case GID_PAN:
    if ((gi.dwFlags & GF_BEGIN) == 0) //not the first message
    {
    g_drawingObject.Move(gi.ptsLocation.x - lastPoint.x,
    gi.ptsLocation.y - lastPoint.y);
    InvalidateRect(hWnd, NULL, TRUE);
    }
    }

    //Remember last values for delta calculations
    lastPoint.x = gi.ptsLocation.x;
    lastPoint.y = gi.ptsLocation.y;
    lastArguments = gi.ullArguments;
    CloseGestureInfoHandle((HGESTUREINFO)lParam);
    return 0;
    }

  3. Compile and run
  4. Try to move the object with two fingers; you can see that the object follows your fingers’ movement. There are some facts that you should be aware of:
    1. The touch location comes in screen coordinates. Since we are only interested in the delta, this doesn’t matter here; but if we needed the exact location, we would have to call ScreenToClient() to make the adjustment.
    2. Try to move the object without touching it; instead, touch the screen in an empty area of the window. It moves! We didn’t perform “hit testing” to check whether the touch location is inside the object. This is where we would really need the location in window coordinates.
  5. Complete the application and respond to all gesture IDs:

    LRESULT ProcessGestureMessage(HWND hWnd, WPARAM wParam, LPARAM lParam)
    {
    static POINT lastPoint;
    static ULONGLONG lastArguments;

    GESTUREINFO gi;

    gi.cbSize = sizeof(GESTUREINFO);
    gi.dwFlags = 0;
    gi.ptsLocation.x = 0;
    gi.ptsLocation.y = 0;
    gi.dwID = 0;
    gi.dwInstanceID = 0;
    gi.dwSequenceID = 0;

    BOOL bResult = GetGestureInfo((HGESTUREINFO)lParam, &gi);

    switch(gi.dwID)
    {
    case GID_PAN:
    if ((gi.dwFlags & GF_BEGIN) == 0) //not the first message
    {
    g_drawingObject.Move(gi.ptsLocation.x - lastPoint.x,
    gi.ptsLocation.y - lastPoint.y);
    // repaint the rect
    InvalidateRect(hWnd, NULL, TRUE);
    }
    break;

    case GID_ZOOM:
    if ((gi.dwFlags & GF_BEGIN) == 0 && lastArguments != 0) //not the first message
    {
    double zoomFactor = (double)gi.ullArguments/lastArguments;

    // Now we process zooming in/out of object
    g_drawingObject.Zoom(zoomFactor, gi.ptsLocation.x, gi.ptsLocation.y);

    InvalidateRect(hWnd,NULL,TRUE);
    }
    break;
    case GID_ROTATE:
    {
    //The angle is the rotation delta from the last message,
    //or from 0 if it is a new message
    ULONGLONG lastAngle = ((gi.dwFlags & GF_BEGIN) != 0) ? 0 : lastArguments;
    int angle = (int)(gi.ullArguments - lastAngle);

    g_drawingObject.Rotate( GID_ROTATE_ANGLE_FROM_ARGUMENT(angle),
    gi.ptsLocation.x, gi.ptsLocation.y);
    InvalidateRect (hWnd, NULL, TRUE);
    }
    break;

    case GID_PRESSANDTAP:
    g_drawingObject.ShiftColor();
    InvalidateRect(hWnd, NULL, TRUE);
    break;

    case GID_TWOFINGERTAP:
    g_drawingObject.ToggleDrawDiagonals();
    InvalidateRect(hWnd, NULL, TRUE);
    break;
    }

    //Remember last values for delta calculations
    lastPoint.x = gi.ptsLocation.x;
    lastPoint.y = gi.ptsLocation.y;
    lastArguments = gi.ullArguments;

    CloseGestureInfoHandle((HGESTUREINFO)lParam);
    return 0;
    }

  6. Compile and run
  7. Test all gestures.

Task 5 – There Is a Bug There!

  1. Try to rotate the object. What happened? By default, a window receives all gestures except rotation. We can configure the touch engine to supply only some of the available gestures, or all of them.
  2. Add the following code to the InitInstance() function just before the ShowWindow() call to enable all gesture types, including GID_ROTATE:

    //Enable all gesture types
    GESTURECONFIG gestureConfig;
    gestureConfig.dwID = 0;
    gestureConfig.dwBlock = 0;
    gestureConfig.dwWant = GC_ALLGESTURES;


    BOOL result = SetGestureConfig(hWnd, 0, 1, &gestureConfig, sizeof(gestureConfig));

  3. Compile and run
  4. Try to rotate the object. It works! Well done!