AdaBoostTrigger

The AdaBoostTrigger is a detection technology that produces a binary or discrete result. It uses an Adaptive Boosting (AdaBoost) machine learning algorithm to determine when a user performs a certain gesture.

During training, it accepts input tags: Boolean values that mark the occurrence of a gesture, such as a hit. This tagging is used to evaluate whether a gesture has happened and to determine the confidence value of the event.

Note

You can override the AdaBoostTrigger project settings in the solution global settings.

Note

When using AdaBoostTrigger, you must include the vgbtechs\AdaBoostTech.dll runtime component with your application. For more information, see Visual Gesture Builder Headers, Libraries, and Assemblies.

  • Input Parameters
  • Run Time Data

Input Parameters

The following table describes the input parameters that you can use when tagging a gesture. You enter these parameters in the Project Settings grid, as shown in the Visual Gesture Builder Training Project.

Property Name Type Description
Accuracy Level FLOAT

A floating point value in the range [0..1].

This value controls how accurate your results are, but it also affects the training time: the higher the accuracy, the longer the training time. As a guideline, you can use the following values:

  • Retail Build – 0.98.
  • Release Build – 0.95.
  • Quick Experiment – 0.8.
Number of Weak Classifiers at Run Time INT The algorithm can potentially generate tens of thousands of weak classifiers. Using all of them increases accuracy, but at a higher CPU cost on the computer. To use all of the generated classifiers, use a value of 0.

The CPU cost for 1,000 weak classifiers is only 25 microseconds, and the results have more than adequate accuracy.
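The accuracy/CPU trade-off comes from how AdaBoost evaluates a detection: the strong classifier sums the weighted votes of its weak classifiers, so truncating the list saves CPU at some cost in accuracy. The following is an illustrative Python sketch of that generic mechanism, not the actual AdaBoostTech.dll implementation; the feature names are assumptions.

```python
def adaboost_detect(weak_classifiers, frame_features, max_classifiers=0):
    """Illustrative AdaBoost strong classifier for one frame.

    weak_classifiers: list of (weight, predict) pairs, where predict maps
    a frame's features to +1 (gesture) or -1 (no gesture).
    max_classifiers=0 means use all generated classifiers, as in the
    property described above.
    """
    if max_classifiers > 0:
        weak_classifiers = weak_classifiers[:max_classifiers]
    # Sum the weighted votes and compare against zero.
    score = sum(w * predict(frame_features) for w, predict in weak_classifiers)
    return score > 0.0  # raw per-frame result, before any filtering

# Hypothetical weak classifiers over made-up features, for illustration only.
weak = [(0.9, lambda f: 1 if f["hand_speed"] > 2.0 else -1),
        (0.4, lambda f: 1 if f["elbow_angle"] < 90 else -1)]
```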

Filter Results BOOL

The results of the algorithm are produced per frame, not per gesture, so a filter needs to be applied to the raw per-frame results. The AdaBoostTrigger provides a simple low-latency filter, but you have the option of disabling filtering and applying your own filter at run time on the computer.

The filter used is a simple sliding window of N frames: the per-frame results are summed and the sum is compared against a threshold value. The number of frames can be seen as a frequency, and the threshold can be seen as an amplitude.
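If you disable the built-in filtering and apply your own, the sliding window described above can be sketched as follows. This is a hedged Python sketch of the described behavior, interpreting the [0..1] threshold as the required fraction of positive frames in the window; the real filter inside AdaBoostTrigger may differ in detail.

```python
from collections import deque

class SlidingWindowFilter:
    """Sliding-window filter over raw per-frame detection results."""

    def __init__(self, num_frames, threshold):
        self.window = deque(maxlen=num_frames)  # last N raw results
        self.threshold = threshold              # fraction in [0..1]

    def update(self, raw_detected):
        """Feed one raw per-frame result; return the filtered result."""
        self.window.append(1 if raw_detected else 0)
        # Sum the window and compare the positive fraction to the threshold.
        return sum(self.window) / self.window.maxlen >= self.threshold
```

A small window with a low threshold reacts quickly but passes more noise; a larger window with a higher threshold smooths the output at the cost of latency.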

Auto Find Best Filtering Params BOOL When filtering is switched on, you have the option to let the trainer automatically find the best filtering parameters, which minimize the rate of false positives and false negatives.
Weight Of False Positives During Auto Find FLOAT

A value in the range [0..1] that is used when automatically finding the best filtering parameters.

If it is more important to reduce false positives, use a higher value. If it is more important to reduce false negatives, use a lower value.
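One plausible way to read this weight is as a cost function over the manual-parameter ranges listed below. The following sketch is a hypothetical illustration of such a search; the function names, the grid-search strategy, and the exact cost formula are assumptions, not the actual trainer's algorithm.

```python
def filter_cost(false_positives, false_negatives, fp_weight):
    # fp_weight in [0..1]: higher values punish false positives more.
    return fp_weight * false_positives + (1.0 - fp_weight) * false_negatives

def find_best_params(evaluate, fp_weight):
    """Grid-search the manual filter parameter ranges.

    evaluate(n_frames, threshold) -> (false_positives, false_negatives)
    measured over a labeled validation set (supplied by the caller).
    """
    best = None
    for n_frames in range(1, 11):                        # frames in [1..10]
        for threshold in (t / 10.0 for t in range(11)):  # threshold in [0..1]
            fp, fn = evaluate(n_frames, threshold)
            cost = filter_cost(fp, fn, fp_weight)
            if best is None or cost < best[0]:
                best = (cost, n_frames, threshold)
    return best[1], best[2]
```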

Manual Filter Params: Number of Frames To Filter INT

A value in the range of [1..10].

You can choose the number of frames over which to filter if you do not choose to find the best filtering parameters automatically.

Manual Filter Params: Threshold FLOAT

A value in the range of [0..1].

Lower values could increase true positives, at the risk of increasing false positives.

Higher values could decrease false positives, at the risk of decreasing true positives.

Duplicate And Mirror Data During Training BOOL Body data can be duplicated and mirrored in order to have a larger set of training data.
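Mirroring a body frame amounts to reflecting it across the vertical axis: negate each joint's X coordinate and swap left/right joint names. The sketch below illustrates this idea; the joint naming scheme is an assumption for illustration, not the trainer's internal representation.

```python
def mirror_frame(joints):
    """Return a mirrored copy of one body frame.

    joints: dict mapping joint name -> (x, y, z).
    """
    mirrored = {}
    for name, (x, y, z) in joints.items():
        # Swap the left/right prefix so e.g. LeftHand becomes RightHand.
        if name.startswith("Left"):
            name = "Right" + name[len("Left"):]
        elif name.startswith("Right"):
            name = "Left" + name[len("Right"):]
        mirrored[name] = (-x, y, z)  # reflect across the vertical axis
    return mirrored
```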
% CPU For Training INT A value in the range of [0..100] which indicates the percentage of CPU resources that the trainer should use for training.
Use Hands Data BOOL

Defaults to false, which means that by default the hand states are not used for training and detection. For training and detection to use the hand states, set this property to true.

Note
Hands data is only available for up to two users. If your application supports more than two simultaneous users, you should not use hands data during gesture training.
Ignore Left Arm BOOL

Defaults to false, which means that by default the following left-arm joints are used for training:

  • elbow
  • wrist
  • hand
  • hand tip
  • thumb

For training to ignore the left-arm joints, set this property to true. This is useful when you train a right-hand gesture and the signals from the left arm need to be ignored.

Ignore Right Arm BOOL

Defaults to false, which means that by default the following right-arm joints are used for training:

  • elbow
  • wrist
  • hand
  • hand tip
  • thumb

For training to ignore the right-arm joints, set this property to true. This is useful when you train a left-hand gesture and the signals from the right arm need to be ignored.

Ignore Lower Body BOOL

Defaults to false, which means that by default the following lower-body joints are used for training:

  • knees
  • ankles
  • feet

For training to ignore the lower-body joints, set this property to true. This is useful when you train a gesture that uses only the upper body, or when you want a gesture to be applicable to both seated and standing positions.

Run Time Data

An AdaBoostTrigger gesture is represented as a DiscreteGestureResult at run time. Use the DiscreteGestureResult to check whether the gesture has been detected and with what confidence.

Remarks

In training, if you tag only positive frames, AdaBoostTrigger uses all frames by default; the frames that are not tagged are used as negative frames. This can generate a very large training set if the clips are long.

In training, if you intend to use only some frame ranges of a clip instead of all of its frames, you can explicitly tag negative frames. During training, the tool detects these explicitly tagged negative frames and ignores the non-tagged frames.

The tool UI is designed to help you convert from implicit negative to explicit negative tagging.

When you select a range of frames and set its value to FALSE, and there are non-tagged frames (gaps) in the range, only those non-tagged frames are set to FALSE. Existing tagged frames are not affected.
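The gap-fill behavior described above can be sketched as follows; this is an illustrative Python sketch of the rule, assuming a simple frame-index-to-Boolean tag map rather than the tool's actual data model.

```python
def set_range_false(tags, start, end):
    """Set a frame range to FALSE, filling only the untagged gaps.

    tags: dict mapping frame index -> bool; a missing key means untagged.
    Frames already tagged (TRUE or FALSE) keep their existing value.
    """
    for frame in range(start, end + 1):
        if frame not in tags:   # only fill the gaps
            tags[frame] = False
    return tags
```

For example, selecting frames 1 through 5 when frame 2 is already tagged TRUE leaves frame 2 as TRUE and tags only the previously untagged frames as FALSE.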