Effects

Effects process the image in some way, and together with sources and renderers they constitute the building blocks of a processing pipeline. Many effects implement both IImageConsumer and IImageProvider, and can thus be chained with other effects, a source or a renderer: FilterEffect, HdrEffect, InteractiveForegroundSegmenter, LensBlurEffect and CustomEffectAdapter. A few other image-processing types don't quite work this way but are also covered in this document: AutoFixAnalyzer, ImageAligner and ObjectExtractor.

AutoFixAnalyzer

AutoFixAnalyzer analyzes an image and suggests how to improve it. It can be used in combination with TemperatureAndTintFilter and/or SaturationLightnessFilter. A call to AnalyzeAsync analyzes the image and returns saturation and lightness curves, and temperature and tint parameter values. The caller can then choose to apply any combination of these to the image. In the sample below, all the returned parameters are applied to the analyzed image, and the result is rendered to a JPEG buffer.

using (var imageSource = new StorageFileImageSource(image))
using (var filterEffect = new FilterEffect(imageSource))
using (var renderer = new JpegRenderer(filterEffect))
{
    var analyzer = new AutoFixAnalyzer(imageSource);
    AutoFixAnalyzerResult autoFixSuggestions = await analyzer.AnalyzeAsync();

    var temperatureAndTintFilter = new TemperatureAndTintFilter();
    temperatureAndTintFilter.Temperature = autoFixSuggestions.TemperatureParameter;
    temperatureAndTintFilter.Tint = autoFixSuggestions.TintParameter;

    var saturationLightnessFilter = new SaturationLightnessFilter();
    saturationLightnessFilter.SaturationCurve = autoFixSuggestions.SaturationCurve;
    saturationLightnessFilter.LightnessCurve = autoFixSuggestions.LightnessCurve;

    filterEffect.Filters = new IFilter[] { saturationLightnessFilter, temperatureAndTintFilter };

    var buffer = await renderer.RenderAsync();
}
[Images: original image and auto-fix result]

Mapping auto fix curves to slider values

Additional methods in the Curve class allow the saturation and lightness results of the analyzer to be mapped to a slider value. This can be done by defining two extreme curves and a slider value that is used to interpolate between them.

In the case of the AutoFixAnalyzer, the reverse operation is also needed, since the analyzer returns curves for saturation and lightness rather than slider values. Again, methods in the Curve class can be used to find the interpolation factor between the extreme curves that produces the closest matching curve.

var lowLightnessCurve = new Curve(CurveInterpolation.NaturalCubicSpline);
lowLightnessCurve.SetPoint(148, 108); 

var highLightnessCurve = new Curve(CurveInterpolation.NaturalCubicSpline);
highLightnessCurve.SetPoint(108, 148); 

var minMaxLightnessPair = new CurveMinMaxPair(lowLightnessCurve, highLightnessCurve);

using (var imageSource = new StorageFileImageSource(image))
{
    var analyzer = new AutoFixAnalyzer(imageSource);
    AutoFixAnalyzerResult autoFixSuggestions = await analyzer.AnalyzeAsync();

    var suggestedSliderValue = Curve.EstimateInterpolationFactor(autoFixSuggestions.LightnessCurve, minMaxLightnessPair);

    await RenderSourceWithLightnessValue(imageSource, minMaxLightnessPair, suggestedSliderValue);

    // Simulate user interaction:
    var fakeSliderValue = 0.3;
    await RenderSourceWithLightnessValue(imageSource, minMaxLightnessPair, fakeSliderValue);
}
                
// ...

private async Task RenderSourceWithLightnessValue(IImageProvider source, CurveMinMaxPair minMaxLightnessPair, double lightnessValue)
{
    var userModifiedLightnessCurve = Curve.Interpolate(minMaxLightnessPair, lightnessValue);

    var saturationLightnessFilter = new SaturationLightnessFilter();
    saturationLightnessFilter.LightnessCurve = userModifiedLightnessCurve;

    using (var filterEffect = new FilterEffect(source))
    using (var renderer = new JpegRenderer(filterEffect))
    {
        filterEffect.Filters = new IFilter[] { saturationLightnessFilter };

        var buffer = await renderer.RenderAsync();
    }
}
[Table: generated curve and rendered result for two lightness values: 0.96, the value suggested by the AutoFixAnalyzer, and 0.3, a simulated user interaction value]

Blend Effect

The BlendEffect takes a background image source and blends a foreground image source onto it.

If an alpha channel is present in the foreground image, it is used to combine the result of the blend effect with the original foreground image. A grayscale image can be provided as a separate alpha mask, and will then be used instead of the alpha channel in the foreground image. The Level property functions as a global alpha value, and is multiplied with the alpha value of each pixel to produce the actual value used.

The following code sample blends an image consisting of a black frame around an otherwise transparent center onto another image.

using (var backgroundSource = new StorageFileImageSource(backgroundFile))
using (var foregroundSource = new StorageFileImageSource(foregroundFile))    
using (var blendEffect = new BlendEffect(backgroundSource, foregroundSource, BlendFunction.Normal))
using (var renderer = new BitmapRenderer(blendEffect))
{
    var buffer = await renderer.RenderAsync();
}
[Images: background image, foreground image, blend result]
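
The Level property described above can be used to make the blend more subtle. A minimal sketch, assuming Level takes values in the [0, 1] range (the same range as the level constructor argument used in the segmentation example later in this document):

using (var backgroundSource = new StorageFileImageSource(backgroundFile))
using (var foregroundSource = new StorageFileImageSource(foregroundFile))
using (var blendEffect = new BlendEffect(backgroundSource, foregroundSource, BlendFunction.Normal))
using (var renderer = new BitmapRenderer(blendEffect))
{
    // Level acts as a global alpha: 0.5 blends the frame at half opacity.
    blendEffect.Level = 0.5;
    var buffer = await renderer.RenderAsync();
}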

The blend effect can also work on an image and a separate alpha mask, represented by a grayscale image. This is useful for several reasons:

  • The GradientImageSource can be used to generate grayscale masks.
  • The output of the InteractiveForegroundSegmenter is a black and white mask, which can be used directly as input to the blend effect.
  • Conserving memory. See the description of the AlphaToGrayscaleFilter below for an explanation of how to save memory when blending is done repeatedly with the same image, or set of images, containing an alpha mask.

The following code sample demonstrates using a foreground image without an alpha channel, and a separate grayscale image as the alpha mask.

using (var backgroundSource = new StorageFileImageSource(backgroundFile))
using (var foregroundImageSource = new StorageFileImageSource(foregroundImageFile))
using (var foregroundMaskSource = new StorageFileImageSource(foregroundMaskFile))    
using (var blendEffect = new BlendEffect(backgroundSource, foregroundImageSource))
using (var renderer = new BitmapRenderer(blendEffect))
{
    blendEffect.MaskSource = foregroundMaskSource;
    blendEffect.BlendFunction = BlendFunction.Normal;        
    var buffer = await renderer.RenderAsync();
}
[Images: background image, foreground image, foreground mask, blend result]

Local Blending

Note: Available since version 1.2 beta

Blending can also be done into a target area of the background source. The TargetArea is specified with a Rect, using the unit coordinate system of the background image, i.e. the top left corner of the background image is at (0, 0) and the bottom right corner is at (1, 1). The area can also be rotated around its center by setting TargetAreaRotation to the desired angle of counterclockwise rotation.

There is also a TargetOutputOption property that is used to control how the foreground is rendered into the target area. If set to Stretch, the foreground image will be resized to fit the target area exactly. If set to PreserveAspectRatio, the foreground image will be blended into the target area centered and with the original aspect ratio intact. If set to PreserveSize, the size portion of the target area will be ignored, and the foreground image will be blended in its original size.

The following code uses the same input images as the example above, but blends into a smaller area.

using (var backgroundSource = new StorageFileImageSource(backgroundFile)) 
using (var foregroundImageSource = new StorageFileImageSource(foregroundImageFile)) 
using (var foregroundMaskSource = new StorageFileImageSource(foregroundMaskFile))
using (var blendEffect = new BlendEffect(backgroundSource, foregroundImageSource)) 
using (var renderer = new BitmapRenderer(blendEffect)) 
{
    blendEffect.MaskSource = foregroundMaskSource;
    blendEffect.BlendFunction = BlendFunction.Normal;
    blendEffect.TargetArea = new Rect(0, 0.48, 0.3, 0.3);
    blendEffect.TargetAreaRotation = -3;
    blendEffect.TargetOutputOption = OutputOption.PreserveAspectRatio;      
        
    var buffer = await renderer.RenderAsync();    
}
[Image: blend to target area]

Caching Effect

The CachingEffect flattens its source graph into a bitmap and caches the result until the user calls Invalidate. This lets the user be explicit about avoiding costly re-rendering.

A filter graph may contain an arbitrary number of filter effects that are applied to the source image every time it is rendered. In the example, we blend the results of two filter graphs, FilterEffect A and FilterEffect B, which are connected to the same source. In this case, the expensive effect is applied twice: once to produce the input of FilterEffect A, and once again to produce the input of FilterEffect B.

[Diagram: FilterEffect A and FilterEffect B blended without caching; the expensive effect runs twice]

To make this more efficient, the result of the expensive effect can be cached using a CachingEffect, which keeps the rendered result in memory. The expensive effect is then applied only once when blending A and B. The cache can be refreshed by calling the Invalidate method.

[Diagram: the same pipeline with a CachingEffect inserted after the expensive effect; it now runs only once]

var image = await KnownImages.Documentation.Aquarium.GetFileAsync();
using (var imageSource = new StorageFileImageSource(image))
using (var expensiveFilter = new FilterEffect(imageSource))
using (var cachingEffect = new CachingEffect(expensiveFilter))
using (var filterEffectA = new FilterEffect(cachingEffect))
using (var filterEffectB = new FilterEffect(cachingEffect))
using (var blendEffect = new BlendEffect(filterEffectA, filterEffectB, BlendFunction.Multiply))
using (var renderer = new BitmapRenderer(blendEffect))
{
    var blurFilter = new BlurFilter(64);
    var brightnessFilter = new BrightnessFilter();
    var grayscaleFilter = new GrayscaleFilter();
    expensiveFilter.Filters = new IFilter[] { blurFilter };
    filterEffectA.Filters = new IFilter[] { brightnessFilter };
    filterEffectB.Filters = new IFilter[] { grayscaleFilter };
    var result = await renderer.RenderAsync();
}
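
If the expensive stage later changes, the cache must be refreshed explicitly. A minimal sketch, continuing inside the using block above:

// Replace the expensive stage, then invalidate the cache so that the next
// render re-executes it once and caches the new result.
expensiveFilter.Filters = new IFilter[] { new BlurFilter(16) };
cachingEffect.Invalidate();
var updatedResult = await renderer.RenderAsync();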

Custom Effect Adapter

Use a CustomEffectAdapter when the image is to be processed by a user-implemented class that implements the ICustomEffect interface. The ICustomEffect is passed into the CustomEffectAdapter when it is created. Note that the ICustomEffect is only weakly referenced by the CustomEffectAdapter; this is an example of the adapter pattern used in several places in the SDK. The user should provide an "outer" class that holds strong references to both the CustomEffectAdapter and the ICustomEffect. See the class reference of CustomEffectAdapter for further information. To avoid having to implement the pattern, developers using C# are recommended to use the subclass CustomEffectBase instead, which implements the adapter pattern correctly. See Custom Sources and Effects. Developers using C++ (with WRL or C++/CX) can use similar helper classes available in the "extras" repository on GitHub.
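
As an illustration, here is a minimal sketch of a custom effect built on CustomEffectBase that inverts the color channels of each pixel. It assumes the OnProcess(PixelRegion, PixelRegion) override and the ToColor/FromColor helpers of CustomEffectBase; see Custom Sources and Effects for the authoritative walkthrough.

public class InvertEffect : CustomEffectBase
{
    public InvertEffect(IImageProvider source) : base(source)
    {
    }

    protected override void OnProcess(PixelRegion sourcePixelRegion, PixelRegion targetPixelRegion)
    {
        sourcePixelRegion.ForEachRow((index, width, position) =>
        {
            for (int x = 0; x < width; ++x, ++index)
            {
                var color = ToColor(sourcePixelRegion.ImagePixels[index]);
                // Invert the color channels, leaving the alpha channel untouched.
                var inverted = Color.FromArgb(color.A, (byte)(255 - color.R), (byte)(255 - color.G), (byte)(255 - color.B));
                targetPixelRegion.ImagePixels[index] = FromColor(inverted);
            }
        });
    }
}

Because the base class implements the adapter pattern, such an effect can be chained like any other: new InvertEffect(imageSource) can feed a renderer or another effect directly.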

Filter Effect

Use the FilterEffect when you want to apply one or more of the many lightweight filters included in the SDK. The effect applies a list of filters to the image, one by one. A useful analogy is how optical filters can be stacked onto the lens of an SLR camera. The SDK comes with more than 50 filter implementations: Sepia, MagicPen, Antique, etc.

The application might often choose to use only a single FilterEffect in the pipeline, but since it effectively is a "filter group," it can be used to form a "preset" or "module" that can be added to or removed from a processing pipeline.
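
A minimal sketch of such a preset, stacking two of the built-in filters (the particular combination is purely illustrative):

using (var imageSource = new StorageFileImageSource(storageFile))
using (var filterEffect = new FilterEffect(imageSource))
using (var renderer = new JpegRenderer(filterEffect))
{
    // The filters are applied in list order: grayscale first, then the sepia tint.
    filterEffect.Filters = new IFilter[] { new GrayscaleFilter(), new SepiaFilter() };
    var buffer = await renderer.RenderAsync();
}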

The following sections detail the use of some of the more complex filters.

Alpha to Grayscale Filter

The AlphaToGrayscaleFilter copies the alpha channel to the color channels, resulting in a grayscale representation of the alpha channel; the alpha channel itself is set to 255. This filter can be used to split an image containing alpha information (e.g. coming from a PNG file) into an image with color information only and a grayscale mask. If this is done as a preprocessing step, the two images can later be used as inputs, e.g. to the BlendEffect as described above, saving memory since JPEG files can be processed much more efficiently than PNG files.

using (var imageSource = new StorageFileImageSource(pngFile))
using (var filterEffect = new FilterEffect(imageSource))
using (var jpegRenderer = new JpegRenderer())
{
    // Render the color information only; the JPEG encoding discards the alpha channel.
    jpegRenderer.Source = imageSource;
    var imageBuffer = await jpegRenderer.RenderAsync();

    // Render the alpha channel as a separate grayscale mask.
    var alphaToGrayscaleFilter = new AlphaToGrayscaleFilter();
    filterEffect.Filters = new IFilter[] { alphaToGrayscaleFilter };
    jpegRenderer.Source = filterEffect;
    var maskBuffer = await jpegRenderer.RenderAsync();
}

Hue Saturation Lightness Filter

Note: Available since version 2.0

This filter can be used when changing the hue to correct or adjust the color tone in an image. In addition to changing the hue, the lightness and saturation can be raised or lowered for any particular hue. The HueSaturationLightnessFilter works with three curve properties:

  • HueCurve maps hue to hue. The x-axis is restricted to the values [0, 255], which represent the hue range [0, 359]. The values on the y-axis are restricted to [0, 510], representing the hue range [0, 718].
  • SaturationCurve maps hue to a change in saturation. The x-axis is restricted to the values [0, 255], which represent the hue range [0, 359]. On the y-axis, the permitted range is [-255, 255], where 0 represents no change, 255 represents the maximum increase in saturation, and -255 represents the maximum decrease in saturation, producing a black-and-white image.
  • LightnessCurve maps hue to a change in lightness. The x-axis is restricted to the values [0, 255], which represent the hue range [0, 359]. On the y-axis, the permitted range is [-255, 255], where 0 represents no change, 255 represents the maximum increase in lightness, and -255 represents the maximum decrease in lightness.

Setting a curve property to null will leave that property unchanged. Null is the default value for all the properties.

In the sample below, we adjust hues in the green range to become blue, causing the green tones of the wall in the background to turn blue.

using (var source = new StorageFileImageSource(sourceFile))
using (var filterEffect = new FilterEffect(source))
using (var renderer = new BitmapRenderer(filterEffect))
{ 
    var filter = new HueSaturationLightnessFilter(); 
    filterEffect.Filters = new IFilter[] { filter }; 

    var hueCurve = new Curve();
    hueCurve.SetPoint(139, 139);
    hueCurve.SetPoint(140, 85); 
    hueCurve.SetPoint(185, 130); 
    hueCurve.SetPoint(186, 186);    

    filter.HueCurve = hueCurve;
        
    var buffer = await renderer.RenderAsync();
}
[Images: original image and result]

The hue curve maps an old hue to a new hue. Here, we map most of the range using the identity curve, but the range corresponding to green is transposed into the blue range.

[Image: hue curve]

In the sample below, we increase the saturation for green hues to give the image more vibrant colors.

using (var source = new StorageFileImageSource(sourceFile))
using (var filterEffect = new FilterEffect(source))
using (var renderer = new BitmapRenderer(filterEffect))
{
    var filter = new HueSaturationLightnessFilter(); 
    filterEffect.Filters = new IFilter[] { filter }; 

    var saturationCurve = new Curve();
    saturationCurve.SetPoint(25, 0);
    saturationCurve.SetPoint(40, 255); 
    saturationCurve.SetPoint(80, 255); 
    saturationCurve.SetPoint(95, 0);
    saturationCurve.SetPoint(255, 0); 

    filter.SaturationCurve = saturationCurve;
    var buffer = await renderer.RenderAsync();
}
[Images: original image and result after increasing saturation for green hues]

The saturation is increased for green hues (hue and lightness remain unchanged).

[Image: saturation curve]

Reframing Filter

The ReframingFilter lets the user freely reframe the image by effectively specifying a new "canvas". A reframing area is placed over the image by specifying a rectangle, a rotation, and optionally a pivot point which otherwise defaults to the center of the reframing area. This rectangle can extend outside the current boundaries of the image, and any such area will be rendered in transparent black.

Here is a code sample that performs three reframing operations on an image:

  1. The image is reframed as a close up, by setting up a ReframingArea.
  2. The area from step 1 is reframed, rotating the ReframingArea by 25 degrees using the center of the ReframingArea as a pivot point.
  3. The area from step 1 is reframed, rotating the ReframingArea by 25 degrees, this time using the top left corner of the ReframingArea as a pivot point.
using (var imageSource = new StorageFileImageSource(storageFile))
using (var filterEffect = new FilterEffect(imageSource))
using (var renderer = new BitmapRenderer(filterEffect))
{
    var filter = new ReframingFilter();

    // 1. Reframe as a close up, without rotation.
    filter.ReframingArea = new Windows.Foundation.Rect(180, 10, 200, 340);
    filter.Angle = 0;
    filterEffect.Filters = new IFilter[] { filter };
    var buffer1 = await renderer.RenderAsync();

    // 2. Rotate the reframing area 25 degrees around its center (the default pivot point).
    filter.Angle = 25;
    var buffer2 = await renderer.RenderAsync();

    // 3. Rotate 25 degrees around the top left corner of the reframing area instead.
    filter.PivotPoint = new Windows.Foundation.Point(0, 0);
    var buffer3 = await renderer.RenderAsync();
}
[Images: original image, first reframing, second reframing, third reframing]

For simple crop operations within the boundaries of the original image, use the CropFilter. To rotate the image an arbitrary angle while resizing the "canvas" so that the entire original image is shown, use the RotationFilter.
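
A minimal sketch of those two alternatives, assuming CropFilter takes its crop rectangle (in pixel coordinates) and RotationFilter its angle (in degrees) as constructor arguments:

using (var imageSource = new StorageFileImageSource(storageFile))
using (var filterEffect = new FilterEffect(imageSource))
using (var renderer = new BitmapRenderer(filterEffect))
{
    // Simple crop within the boundaries of the original image.
    filterEffect.Filters = new IFilter[] { new CropFilter(new Windows.Foundation.Rect(180, 10, 200, 340)) };
    var croppedBuffer = await renderer.RenderAsync();

    // Rotate 25 degrees; the canvas is resized so that the entire image remains visible.
    filterEffect.Filters = new IFilter[] { new RotationFilter(25) };
    var rotatedBuffer = await renderer.RenderAsync();
}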

Saturation Lightness Filter

Note: Available since version 2.0

This filter can be used to change the lightness or adjust the saturation of the colors in the image. Lowering the lightness hides detail in the image, while raising it brings out more detail. Increase the saturation for more vivid colors, or decrease it to zero for a black-and-white effect. The SaturationLightnessFilter works with two curve properties:

  • LightnessCurve: The x-axis refers to the current lightness values, and the matching y-axis values become the new lightness values. Both axes have the range [0, 255]. If the lightness should remain unchanged, the curve should be the identity curve, i.e. y = x, without any offsets.
  • SaturationCurve: The x-axis refers to the current saturation values, and the matching y-axis values become the new saturation values. Both axes have the range [0, 255]. If the saturation should remain unchanged, the curve should be the identity curve, i.e. y = x, without any offsets.

Setting a curve property to null will leave that property unchanged. Null is the default value for all the properties.

In the sample below, we use the lightness curve to increase the contrast in the shadows, and also boost the saturation somewhat using the saturation curve. Note the use of Curve.CombineIntervals to force the upper half of the lightness curve to the identity curve.

using (var source = new StorageFileImageSource(sourceFile)) 
using (var filterEffect = new FilterEffect(source))
using (var renderer = new BitmapRenderer(filterEffect)) 
{
   var filter = new SaturationLightnessFilter();  
   filterEffect.Filters = new IFilter[] { filter }; 
      
   var saturationCurve = new Curve(CurveInterpolation.NaturalCubicSpline);   
   saturationCurve.SetPoint(110, 136);    

   var lightnessCurve = new Curve(CurveInterpolation.NaturalCubicSpline); 
   lightnessCurve.SetPoint(30, 70);               
   lightnessCurve.SetPoint(90, 110);
   lightnessCurve = Curve.CombineIntervals(lightnessCurve, new Curve(), 128);      
              
   filter.LightnessCurve = lightnessCurve;           
   filter.SaturationCurve = saturationCurve; 
               
   var buffer = await renderer.RenderAsync(); 
}
[Images: original, result, lightness curve, saturation curve]

HDR Effect

Note: Available since version 1.1

The HdrEffect applies local tone mapping to a single image to achieve an HDR-like effect. It can be used to apply an "auto fix" to the image, resulting in improved image quality for the majority of images. It can also be used to apply "artistic HDR" to the image.

The Strength property controls how strong the local tone mapping effect will be on the image. With a higher strength setting, more noise is introduced, and this can be suppressed using the NoiseSuppression property. If strength is set to a high value and noise suppression is kept low, the effect will produce dramatic, surrealistic images.

The effect also has properties to control global Gamma and Saturation. For both these properties, 1.0 implies no change. For saturation, values lower than 1 will decrease the saturation in the image, and values greater than 1 will increase the saturation in the image. For gamma, values lower than 1 will produce a lighter image, and values greater than 1 will produce a darker image.

The following example demonstrates how the default settings produce an improved image, and how modifying the settings can result in a much more dramatic image:

using (var source = new StorageFileImageSource(sourceFile)) 
using (var hdrEffect = new HdrEffect(source)) 
using (var renderer = new BitmapRenderer(hdrEffect)) 
{
     var improvedBuffer = await renderer.RenderAsync();    

     hdrEffect.Strength = 0.9; 
     hdrEffect.NoiseSuppression = 0.01;   
     var artisticHdrBuffer = await renderer.RenderAsync(); 
}  
[Images: original image, image improved with HDR, artistic HDR]
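
The global Gamma and Saturation properties can be tuned in the same way, inside the using block above. A minimal sketch with illustrative values:

hdrEffect.Gamma = 0.8;       // Below 1.0: a lighter image.
hdrEffect.Saturation = 1.3;  // Above 1.0: increased saturation.
var tunedBuffer = await renderer.RenderAsync();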

Image Aligner

Note: Available since version 1.2 beta

The ImageAligner is used to align a series of images that differ by a small movement, e.g. a series of images taken in the burst capture mode available in Windows Phone 8.1. Alignment works for small movements only, for example those that occur when the user tries to hold the camera still, and degenerates quickly if the images move too much. It also requires constant or near-constant exposure settings.

Start the alignment by assigning a list of image sources to the Sources property. Optionally, the ReferenceSource property can be set to specify which image in the list will serve as the reference image in the alignment process; the other images will then be modified to align with it. If ReferenceSource is not set, or is explicitly set to null, it defaults to the middle element of the source list.

When the sources are set, you can call the CanAlignAsync method to find out if it is possible to align a particular image source. One or more images may fail to align without the whole alignment process failing. If a source can be aligned, an image source for the aligned image is retrieved by calling AlignAsync. This method will throw an exception if it is called for a source that cannot be aligned.
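
A sketch of that per-source flow, assuming CanAlignAsync and AlignAsync have overloads that take an individual source, as the description above suggests:

foreach (var source in aligner.Sources)
{
    // Skip sources that cannot be aligned instead of letting AlignAsync throw.
    if (await aligner.CanAlignAsync(source))
    {
        var alignedSource = await aligner.AlignAsync(source);
        // ... render or store the aligned source ...
    }
}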

The example below tries to align a list of images, using the second source as reference, and saves successfully aligned sources. The input and output are visualized as animated GIF images. See the documentation on the GifRenderer for information about how to render animated GIFs.

using (var aligner = new ImageAligner())
using (var renderer = new JpegRenderer()) 
{
    aligner.Sources = unalignedSources;
    aligner.ReferenceSource = unalignedSources[1];
        
    var alignedSources = await aligner.AlignAsync(); 
     
    foreach (var alignedSource in alignedSources) 
    {   
        if (alignedSource != null)
        {
            renderer.Source = alignedSource;
            var alignedBuffer = await renderer.RenderAsync();
            Save(alignedBuffer);
        }
    }
}
[Animated GIFs: unaligned images vs. aligned images]

Interactive Foreground Segmenter

Note: Available since version 1.1

The InteractiveForegroundSegmenter segments the image into foreground and background based on annotations to the image provided by the end-user.

As input, InteractiveForegroundSegmenter takes the image to segment and an annotation image where representative parts of the foreground and background areas in the image have been marked using the foreground and background colors that can be set on the object. Using these annotations, it segments the image and generates a mask where the foreground is white and background is black.

Here is an example that uses the interactive foreground segmenter and the blend effect to adjust the hue of the foreground of the image. The user provides the "UserAnnotations" image, where the red area represents the foreground of the photo and the blue area represents the background.

[Images: main image, user annotations, overlay demo, result mask, final result]

Here is the code used to produce the final result above, assuming the user annotations are loaded with a StorageFileImageSource:

using (var source = new StorageFileImageSource(MainImage))
using (var annotations = new StorageFileImageSource(UserAnnotations))
using (var redCanvas = new ColorImageSource(new Size(300, 370), Color.FromArgb(255, 255, 0, 0)))
using (var segmenter = new InteractiveForegroundSegmenter(source))    
using (var blendEffect = new BlendEffect(source, redCanvas, segmenter, BlendFunction.Colorburn, 0.7))
using (var renderer = new JpegRenderer(blendEffect))
{
    segmenter.AnnotationsSource = annotations;
    segmenter.ForegroundColor = Color.FromArgb(255, 251, 0, 0);
    segmenter.BackgroundColor = Color.FromArgb(255, 0, 0, 250);

    var buffer = await renderer.RenderAsync();
}

One could also use a WriteableBitmap to allow the user to draw on a canvas and then use the resulting image as annotations. Here is a code sample that demonstrates creating a WriteableBitmap, drawing on it, and finally using it as an annotations source:

WriteableBitmap bmp = new WriteableBitmap(100, 100);
bmp.DrawLine(20, 10, 20, 90, System.Windows.Media.Color.FromArgb(foreground.A, foreground.R, foreground.G, foreground.B));
bmp.DrawLine(50, 30, 50, 70, System.Windows.Media.Color.FromArgb(background.A, background.R, background.G, background.B));
bmp.DrawLine(80, 10, 80, 90, System.Windows.Media.Color.FromArgb(foreground.A, foreground.R, foreground.G, foreground.B));

Bitmap userAnnotations = bmp.AsBitmap();

using (var annotations = new BitmapImageSource(userAnnotations))
{
    // ...
}

Note: WriteableBitmap's extension method AsBitmap can be found in the Lumia.InteropServices.WindowsRuntime namespace. The DrawLine extension method is part of the WriteableBitmapEx library.

Segmentation is usually an iterative process, meaning that the user will start with a crude version of annotations and inspect the output that the InteractiveForegroundSegmenter generates. The user will then find the areas where the segmentation could be improved, add more annotations to the original annotations image, and render again. This process continues until the user is satisfied with the result.

Please note that the segmentation process can fail if there is not enough information in the AnnotationsSource image. The bare minimum is one pixel in each of the foreground and background colors; usually, more will be required. If the segmentation cannot be completed successfully, an ArgumentException will be thrown with the message "Segmentation could not complete successfully. Try adding more annotations to AnnotationsSource."

Segmentation is an expensive operation, so it cannot be performed at full size on all images with default parameters. To allow processing even of large images, the Quality property should be used. It affects the working size of the algorithm, so a lower quality setting reduces both the memory consumption and the processing time of the effect.
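
A minimal sketch, continuing the segmentation example above and assuming Quality takes values in the (0, 1] range with 1.0 as full quality:

// Lower the working size: less memory and faster processing,
// at the cost of a slightly coarser mask.
segmenter.Quality = 0.5;
var quickBuffer = await renderer.RenderAsync();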

Lens Blur Effect

Note: Available since version 1.1

The LensBlurEffect applies blur to an image in a way similar to how out-of-focus areas are rendered by a lens, an effect also known as bokeh. The effect supports the setting of kernels that correspond to different aperture shapes. There are several predefined shapes included in the SDK (circle, hexagon, flower, star, and heart), and custom, user-defined, shapes are also supported.

Lens blur can be applied to the whole image, or alternatively the user can specify a focus area where no blur will be applied. Different areas of the image can be blurred with different kernels. The user specifies this, and optionally also a focus area, with the kernel map.

A kernel map is a grayscale image where each pixel value represents the index of the kernel that will be applied to the corresponding image pixel. The expected values depend on the KernelMapType setting: the value reserved for the focus area is either 0 (Continuous) or 255 (ForegroundMask). For example, if the center of the image should not be blurred and the KernelMapType is set to ForegroundMask, the center of the kernel map image should have the value 255.

If a pixel should be blurred with the first kernel provided, its value should be 0; for the second kernel, 1; and so forth. LensBlurEffect takes an IImageProvider as its KernelMap input, allowing the developer to provide the kernel map from a wide range of sources; for example, it can be generated with a GradientImageSource or a BufferImageSource.
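
For instance, a radial gradient can serve as a Continuous-type kernel map that keeps the center of the image in focus and blurs it progressively more toward the borders. A rough sketch; the RadialGradient, EllipseRadius and GradientStop usage here is an assumption based on earlier SDK versions:

var stops = new[]
{
    // Black center (value 0): the focus area of a Continuous-type kernel map.
    new GradientStop { Color = Color.FromArgb(255, 0, 0, 0), Offset = 0.0 },
    // White borders: progressively stronger blur toward the image edges.
    new GradientStop { Color = Color.FromArgb(255, 255, 255, 255), Offset = 1.0 }
};
var gradient = new RadialGradient(new Point(0.5, 0.5), new EllipseRadius(0.5, 0.5)) { Stops = stops };

using (var kernelMapSource = new GradientImageSource(new Size(640, 480), gradient))
{
    lensBlurEffect.KernelMap = kernelMapSource;
    // ... set the kernel map type to Continuous and render as usual ...
}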

The following example applies the lens blur effect on the background of the image, while the foreground remains in focus. It uses a mask created by the interactive foreground segmenter as kernel map.

[Images: main image, image with annotations, result]
using (var source = new StorageFileImageSource(mainImage))
using (var annotations = new StorageFileImageSource(userAnnotations))
using (var segmenter = new InteractiveForegroundSegmenter(source))
using (var lensBlurEffect = new LensBlurEffect(source, new LensBlurPredefinedKernel(LensBlurPredefinedKernelShape.Circle, 30)))
using (var renderer = new JpegRenderer(lensBlurEffect))
{
    segmenter.AnnotationsSource = annotations;
    segmenter.ForegroundColor = Color.FromArgb(255, 251, 0, 0); 
    segmenter.BackgroundColor = Color.FromArgb(255, 0, 0, 250); 
    
    lensBlurEffect.KernelMap = segmenter;
    var buffer = await renderer.RenderAsync();
}

One specific area of the image requires special attention: the border between the focus area and the blurred area. Extra work is required to make this area look natural, and the correct behavior largely depends on the context; specifically, on whether the border follows natural lines in the image, such as the outline of a person, or is arbitrary. LensBlurEffect allows you to provide this information by setting the FocusAreaEdgeMirroring property. It is an enum with two options:

  • LensBlurFocusAreaEdgeMirroring.On should be used when the border between focus and blurred area follows some natural lines within the image.
  • LensBlurFocusAreaEdgeMirroring.Off should be used when the focus and blurred areas are arbitrary.

Here are some images that show the difference:

[Images: for both a segmented kernel mask and a gradient kernel mask, the main image, the mask, and the results with LensBlurFocusAreaEdgeMirroring.On and LensBlurFocusAreaEdgeMirroring.Off]

As you can see, both options have valid use cases, and it is up to the developer to decide which setting fits a given scenario.

It should be noted that lens blur is an expensive operation, requiring far more resources than a normal BlurFilter. There is a reason for the increased complexity: the result of the lens blur is of much higher quality, producing more photorealistic images. The cost can be regulated, since the effect can do the bulk of its processing on a smaller image without significantly affecting the quality of the end result. The working size is controlled with the Quality property, allowing the effect to be applied even to large images; a lower quality setting reduces both the memory consumption and the processing time of the effect. The size of each kernel used by the effect is also scaled by the Quality property, so the developer does not need to adjust kernel sizes when changing it. That said, a Quality setting below 1.0 is a compromise, and it will produce worse results in the blurred areas of the image.
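
Continuing the segmentation-based lens blur example above, a minimal sketch that combines both settings (the Quality range is assumed to be (0, 1]):

// The mask follows the person's outline, so mirror the focus area edge.
lensBlurEffect.FocusAreaEdgeMirroring = LensBlurFocusAreaEdgeMirroring.On;
// Do the bulk of the processing at half the working size.
lensBlurEffect.Quality = 0.5;
var quickBuffer = await renderer.RenderAsync();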

Object Extractor

Note: Available since version 2.0

If we have an image and a mask that defines objects in the image - obtained using the interactive foreground segmenter or by some other means - the foreground objects can be extracted and manipulated separately using the ObjectExtractor.

In the sample below, we use a mask to extract two objects from an image. We then paste them onto a red background using the blend effect. A flip filter is used to flip one of the objects, and we use the blend effect's target area to position and manipulate the relative sizes of the objects.

using (var source = new StorageFileImageSource(imageStorageFile)) 
using (var maskSource = new StorageFileImageSource(maskStorageFile)) 
using (var extractor = new ObjectExtractor(source, maskSource)) 
using (var flipFilterEffect = new FilterEffect() { Filters = new [] { new FlipFilter(FlipMode.Horizontal) } }) 
using (var blendEffect1 = new BlendEffect()) 
using (var blendEffect2 = new BlendEffect()) 
using (var finalBackgroundSource = new ColorImageSource(new Size(300, 300), Color.FromArgb(255, 255, 25, 25))) 
using (var jpegRenderer = new JpegRenderer(blendEffect2)) 
{ 
    var extractedObjects = await extractor.ExtractObjectsAsync();
    
    flipFilterEffect.Source = extractedObjects[0];

    blendEffect1.Source = finalBackgroundSource;  
    blendEffect1.TargetOutputOption = OutputOption.PreserveAspectRatio; 
    blendEffect1.TargetArea = new Rect(0.20, 0.2, 0.15, 0.15); 
    blendEffect1.ForegroundSource = flipFilterEffect;  

    blendEffect2.Source = blendEffect1; 
    blendEffect2.TargetOutputOption = OutputOption.PreserveAspectRatio; 
    blendEffect2.TargetArea = new Rect(0.35, 0.05, 0.45, 0.9); 
    blendEffect2.ForegroundSource = extractedObjects[1]; 
  
    jpegRenderer.Source = blendEffect2; 
    var buffer = await jpegRenderer.RenderAsync();
}
[Images: source image and mask, followed by the two extracted objects and the blended result]