Using experiments to improve my articles

DOCS experimentation platform

DOCS contains a built-in experimentation platform that lets authors test in-page article changes with a subset of their audience. All you need to do is create one more .md file containing the changes and save it in the same GitHub directory as the original. Once you have that, email skyeye@microsoft.com and we will get the experiment live. Publishing the experiment should take less than an hour, assuming the .md file is in good shape.
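
For example, if the original article lives at articles/configure-auth.md, the variant copy might sit next to it in the same directory. The file names and naming convention below are purely hypothetical; the SkyEye team can confirm what they expect when you email them:

```
articles/
    configure-auth.md            <- original, served to most users
    configure-auth.variant.md    <- copy containing your proposed changes
```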

Note that a self-serve tool for publishing experiments is being built, and the platform is being expanded to support more experiment types beyond in-page changes, including TOC, right rail, and header experiments.

Once the experiment starts, a configured percentage of users (e.g. 20%) will see the alternative page until a set number of page views is reached (e.g. 1,000). At that point the experiment automatically stops and all users see the original .md again.
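
Here is a minimal sketch, in Python, of what that assignment logic amounts to. The function, constants, and page-view counter are illustrative assumptions; the platform handles all of this for you based on the configuration you provide:

```python
import hashlib

VARIANT_SHARE = 0.20    # e.g. 20% of users see the alternative page
PAGE_VIEW_CAP = 1000    # e.g. stop after 1,000 experiment page views

def sees_variant(user_id: str, views_so_far: int) -> bool:
    """Decide whether this user is served the alternative .md."""
    if views_so_far >= PAGE_VIEW_CAP:
        return False  # experiment over: everyone sees the original again
    # Deterministic hash-based bucketing keeps each user in the same
    # group for the lifetime of the experiment.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < VARIANT_SHARE * 100
```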

The SkyEye Experiments report shows all the stats side by side, including verbatims, LiveFyre comments, and ratings. SkyEye Search also lets you use the filters on the left to narrow results to just articles with an experiment, either completed or in progress, and from there you have a direct link to each experiment's results.

Use the metrics in the SkyEye Experiments report to decide whether to go mainstream with the changes or stick with the original. Among other things, the experiment lets you expose a change you have in mind to only a small subset of users before going mainstream.
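
If you want a quick sanity check on whether a difference in the report is real or just noise, a standard two-proportion z-test is one way to do it. The numbers below are made up, and the SkyEye report may already surface significance for you:

```python
from math import sqrt

# Hypothetical numbers: "was this page helpful?" yes-votes out of views,
# read off the SkyEye Experiments report for each arm.
control_yes, control_views = 120, 800    # original .md
variant_yes, variant_views = 48, 200     # alternative .md

p1 = control_yes / control_views
p2 = variant_yes / variant_views
pooled = (control_yes + variant_yes) / (control_views + variant_views)
se = sqrt(pooled * (1 - pooled) * (1 / control_views + 1 / variant_views))
z = (p2 - p1) / se

print(f"control {p1:.1%}, variant {p2:.1%}, z = {z:.2f}")
# |z| > 1.96 is roughly the 95% significance threshold; below that,
# the difference could easily be noise, so stick with the original.
```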

To get an experiment going, or for more information, email skyeye@microsoft.com.

What is an experiment?

Three things distinguish an experiment from other types of data analysis: making a controlled change to the user experience, making a prediction about the effect (the hypothesis), and intentionally structuring data capture to measure the actual effect. Experimentation may sound intimidating, but it is actually an approachable and intuitive way for non-data scientists to become more data-driven. Why? Because a well-designed experiment yields results that are easy to interpret: the learning happens simply by comparing the predicted effect to the actual effect. We invite you to give it a try!
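
As a toy illustration of those three parts, with entirely made-up numbers:

```python
# 1. Controlled change: the variant page adds a troubleshooting section.
# 2. Prediction (hypothesis): helpful ratings rise by at least 5 points.
predicted_lift = 0.05

# 3. Structured data capture: helpful-rating rate measured per arm.
control_rate = 0.15
variant_rate = 0.24

actual_lift = variant_rate - control_rate
print(f"predicted +{predicted_lift:.0%}, actual +{actual_lift:.0%}")
# The learning happens in the comparison: here the change beat the
# hypothesis, which is itself worth digging into.
```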

The CSI data science team will partner with writers and PMs to enable A/B tests, before vs. after tests, email-driven experiments, usability tests, surveys, and click-pathing experiments. Interested? Drop us a note with your experiment idea: apexds@microsoft.com.