What are they?

A control group can be added to any activity. Adding a control group to your activity ensures that a randomly selected share of players within the targeted segment won't receive any player engagement from the activity.

Why are they used?

The main reason to use a control group is to test and evaluate whether a campaign was successful. Comparing the numbers for players who received the offer with those who didn't will give you an idea of how successful the campaign was.

With this information, you can then evaluate whether it's worth running a similar campaign in the future, or perhaps even stop an ongoing one. Alternatively, it can point toward some amendments being needed.

Follow up on the numbers for your activity in the Performance and Conversion Dashboard. Read more about Analysing the Results further down on this page.

How does it work?

Inside the activity or lifecycle, you can set the size of your control group as a percentage. Players inside your segment will be picked at random to be included in the control group, according to the percentage you've set.

How to set it up

When you build your activity, beneath your action group(s) you'll find the option to add a control group. With a simple click, you activate the control group, setting the delivery percentage to a default of 10%. This can easily be changed to any percentage of your liking; however, we believe that 10% is an ideal number.
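To build intuition for what the platform does behind the scenes, the random split can be sketched in a few lines of Python. This is purely illustrative; the selection happens internally, and the function and names below are our own, not part of any product API.

```python
import random

def split_segment(players, control_pct=10, seed=None):
    """Hypothetical sketch of control-group selection: each player in the
    targeted segment lands in the control group with probability
    control_pct / 100; everyone else stays in the action group."""
    rng = random.Random(seed)
    action_group, control_group = [], []
    for player in players:
        if rng.random() < control_pct / 100:
            control_group.append(player)
        else:
            action_group.append(player)
    return action_group, control_group

# 1000-player segment with the default 10% control group
action, control = split_segment(range(1000), control_pct=10, seed=42)
```

Because each player is assigned independently at random, the control group ends up close to, but rarely exactly, the configured percentage of the segment.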

How to add a control group

Analysing the Results

There's no point implementing control groups unless you invest time in analysing the results.

Once your activity has run, it's time to compare the conversion numbers for the players who received the player engagements with the ones who received nothing at all.

Tracking conversion

It's important that your activities are tracking conversion in the first place. The option to track conversion is set inside your activity when you build it. By default, your activities track conversion for 24 hours, but you can also choose to track conversion for anywhere from 48 hours up to 10 days:

Make sure to track conversion

If this is not set up in your activities, the conversion numbers will not be stored for you, which means you cannot follow up on any conversion data.

Understanding the numbers

You can access all the collected conversion data inside of the Performance and/or Conversion Dashboard. There’s also a shortcut to the conversion numbers (taken from the Conversion Dashboard) for the specific activity if you hover over the three-dot menu from the activity overview:

Conversion Dashboard shortcut

If you check the Conversion Dashboard for your activity, you'll see the top section of numbers displayed in the picture below.

Example 1 of conversion numbers for an activity

This is where you can compare the numbers for your action group(s) (containing players who received the player engagements) with your control group (containing the players who didn't receive anything).

Example 1: Looking at the numbers from the activity displayed in the example above, you can see that Action Group A displays better, more successful numbers than the Control Group in all areas. However, if you look at the 'Summary', it states the following: "No Significant Difference Between Action Group And Control Group Conversion".

Example 2: Looking at the numbers from the activity displayed in the example below, you can see that Action Group A overall displays better, more successful numbers than the Control Group, though not in all areas as in the example above. Still, the 'Summary' states the following: "Action Group Conversion Is Significantly Better Than Control Group".

Example 2 of conversion numbers for an activity

In comparison to their control groups, this means that Action Group A in example 2 performed better than Action Group A in example 1, even though the overall numbers for Action Group A in example 1 were better. Why is this, you might ask? We will explain the logic in a bit more detail next.

The Summary Logic

The Summary is based on a statistical test and takes three things into consideration:

  • Number of fires (how many players were included in the Action Group/Control Group)

  • Conversion rate (deposits / fires)

  • The sample size

To highlight: the summary does not look at the login rate, the value of deposits, and so on. Going back to the conversion results of the two activities in the previous section, this new information gives us the answer to why Action Group A in example 2 was considered significantly better than its control group while the one in example 1 was not. In example 2, the conversion rate of Action Group A was high enough, in comparison to the control group, to be considered significantly better.
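The documentation doesn't spell out which statistical test is used, but a common choice for comparing two conversion rates from exactly these three inputs is a two-proportion z-test. A minimal sketch, assuming that test (the function name is ours, not the platform's):

```python
from math import sqrt, erfc

def two_proportion_z(deposits_a, fires_a, deposits_c, fires_c):
    """Compare the action group's conversion rate against the control
    group's. Returns (z, p): the larger |z| (and the smaller p), the less
    likely the observed difference is a coincidence."""
    rate_a = deposits_a / fires_a            # conversion rate = deposits / fires
    rate_c = deposits_c / fires_c
    pooled = (deposits_a + deposits_c) / (fires_a + fires_c)
    se = sqrt(pooled * (1 - pooled) * (1 / fires_a + 1 / fires_c))
    z = (rate_a - rate_c) / se
    p = erfc(abs(z) / sqrt(2))               # two-sided p-value
    return z, p
```

A summary along these lines would declare the action group significantly better only when the p-value falls below a chosen threshold (0.05 is conventional).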

How the Sample Size can affect the Summary outcome

Let’s take another example of an Action Group vs. a Control Group:

Example 3:

  • Action Group A has 1500 fires and 250 players made a deposit, which makes the conversion rate 16.67%

  • The Control Group had just 10 fires and 1 player made a deposit, which makes the conversion rate 10%

Does this mean that the Action Group A is better because it has a higher conversion rate?

In statistical terms, even though Action Group A is doing better than the Control Group, the fact that the control group only fired to 10 players means that the sample size is not big enough to justify that conclusion. Since the control group is very small, this is taken into consideration: with only 10 people, the logic accounts for the possibility that this might just as well be a coincidence rather than a trend. If the control group had 100 fires instead and 10 of those players made a deposit, the conversion rate would still be 10%, but now we would be more confident that the action group is actually doing better, because we would have a much larger sample to base the conclusion on.

So, to sum it up: the statistical test checks whether the difference between the conversion rates, when taking the sample size into account, is big enough to be significant, or whether it is potentially just a coincidence.
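Running Example 3's numbers through a test of this kind makes the effect of sample size concrete. Again, this assumes a two-proportion z-test; the platform's exact test isn't documented, and the function name is ours:

```python
from math import sqrt, erfc

def p_value(deposits_a, fires_a, deposits_c, fires_c):
    """Two-sided p-value of a two-proportion z-test: the probability of
    seeing a conversion-rate gap at least this large by pure coincidence."""
    rate_a, rate_c = deposits_a / fires_a, deposits_c / fires_c
    pooled = (deposits_a + deposits_c) / (fires_a + fires_c)
    se = sqrt(pooled * (1 - pooled) * (1 / fires_a + 1 / fires_c))
    return erfc(abs(rate_a - rate_c) / se / sqrt(2))

# Example 3 as written: a 10-fire control group
p_small = p_value(250, 1500, 1, 10)    # roughly 0.57: could easily be a coincidence

# Same 10% control conversion rate, but from 100 fires
p_large = p_value(250, 1500, 10, 100)  # roughly 0.08: far less likely to be a coincidence
```

The conversion rates are identical in both cases; only the control group's sample size changes, and the p-value drops roughly sevenfold.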