Where’s the Money Now? Part 12 of 18: Controlling Customers

How successful was your last customer marketing initiative?

There are a number of ways to measure this success, but the key question for most casinos is: How much money did we make?

Unfortunately, this seemingly simple question is complicated by several key considerations: the volatility of customer visitation patterns, short-term versus long-term effects, continuous churn and the impact of luck.

Before we dig into our analysis this month, let us first revisit some of the key concepts that will come into play.

Primer on Churn

Churn, also known as customer attrition, is the simple loss of a customer. It can also be called turnover or defection. Churn is a critical measure in many businesses and the subject of many marketing studies, as it is six to seven times more difficult1 to acquire new customers than it is to prevent existing customers from defecting. There are two quite different kinds of churn: voluntary and involuntary. Voluntary churn happens when the customer chooses to defect, while involuntary churn occurs when events such as a change of address result in the customer leaving. Measures to protect customers from voluntary churn range from improving customer service to improving marketing activities.

To add a twist to this, a dramatic change in behavior, such as decreasing visits from once a week to once a year, can also be classified as a churn event. Involuntary churn can also be twisted. Consider a property that has both local and destination customers. In this situation, involuntary churn becomes an opportunity for a local customer to become a destination customer. As described in the June 2013 issue of CEM,2 taking advantage of these opportunities requires accurate monitoring of customer address attributes.

Primer on Volatility

The volatility of a customer’s visitation patterns is measured by the variation in the time between casino visits or in the amount of spend. This volatility is an extreme challenge in the gaming industry. To illustrate this, compare the spend pattern of a customer going to a food retailer to buy milk with a customer going to a casino. Consumers buying milk tend to do so on a very regular and predictable basis; in fact, if you do drink milk, the use-by date is normally within about a week.3 A casino patron, however, has no use-by date on their visitation—although we might want to create incentives to simulate one.

Long-term Behavioral Changes

Over a long period of time, customer volatility drops out of the equation, and the long-term customer relationship takes over. One useful concept we will explore in this and other articles is the customer life cycle: how customer relationships evolve and change over a long period. These long-term relationship cycles are as hard to stop as the rotation of the Earth, with fundamental long-term change moving throughout the customer base. The analytical challenge is to see this change in the business by looking beyond specific customers to customer behavioral groups. Involuntary churn, by contrast, exhibits an abrupt pattern, creating a dramatic change in a single day.

Short-term Responses

In our experience,4 individual casino customers are erratic in their short-term response to marketing programs, and this creates a real problem. This erratic behavior likely underpins the behavioral patterns of the entertainment industry in general. Compared with consistent products such as financial services, electricity or cell phone usage, gaming and other entertainment products are very much a matter of choice, and that choice is often spontaneous. This erratic behavior requires us to apply test and control marketing (discussed in detail later in this article) in a heavy-handed way; for example, a control group of 50 percent of customers has proven effective.

The Problem with Analysis

With churn, volatility and short-term responses at play, it’s no wonder there are so many issues with accurately measuring the success of marketing programs—especially when many of the common measurements are flawed.

Year-over-year Analysis

Many marketing departments employ year-over-year analysis to determine the success or failure of their marketing programs. There are two key problems with this approach:

1. It doesn’t permit an understanding of which marketing program is the driver of changes in year-over-year revenue.

2. It ignores the impact of external factors such as new competition, changes in the overall economy, weather and casino expansion.

Let’s tackle these issues one at a time. Imagine that casino revenues are up 5 percent. The marketer runs a report and finds that this 5 percent lift is due to an increase in medium-value customers while low- and high-value customers are flat. “Aha!” says the marketer. “My mid-value programs are working!” The problem is this marketer doesn’t know which mid-value programs are working. There is an old adage in advertising attributed to John Wanamaker that says, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” This certainly applies to the marketer who uses year-over-year analysis to evaluate his or her programs.

The second problem is just as pervasive, if not more, in gaming. External factors are constantly working for and against our casino revenues. During good times, the tendency is to say, “Our revenues are up 10 percent, so our marketing programs are great!” During bad times, the tendency is to assume that all the marketing programs are failing, so they get thrown out and new programs come in, only to fail again (sometimes worse). This problem occurs not only in year-over-year analysis but also in redemption analysis.

Redemption Analysis

Imagine a postcard containing a $10 offer sent to 10,000 players. Redemption analysis says that if 4,000 players redeem that offer ($40,000 expense), and those players generate $100,000 of play the day they redeem, then the program has a profit of $60,000 and an ROI of 150 percent.
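The redemption arithmetic above reduces to a few lines. A minimal Python sketch, using the hypothetical numbers from the example:

```python
# Redemption analysis: profit and ROI from the postcard example.
mailed = 10_000          # postcards sent
redeemed = 4_000         # players who redeemed the $10 offer
offer_value = 10         # dollars per redemption
revenue = 100_000        # play generated on the redemption day

expense = redeemed * offer_value    # $40,000 in offer costs
profit = revenue - expense          # $60,000 apparent profit
roi = profit / expense * 100        # 150 percent apparent ROI

print(f"Expense: ${expense:,}  Profit: ${profit:,}  ROI: {roi:.0f}%")
```

As the rest of this section shows, this "profit" figure is exactly what the questions below call into doubt.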

What this analysis doesn’t do is answer any of the following questions:

• How many players would have shown up that day without the $10 offer?

• How many players would come without any offers?

• How many players were driven by a different offer?

• How many players shifted their play from a different day in order to redeem that offer?

• Was $10 the optimal offer for all 10,000 players?

Worse, there are some very real pitfalls when this analytical technique is used:

• A tendency to target high-frequency guests with an offer that they will redeem just because they are already in the building.

• A missed opportunity to drive incremental play from low-frequency guests.

• Favoring programs that drive redemptions rather than those that drive profitability.

• During good times, bad programs can look good.

• During bad times, good programs can look bad.

The Solution: Test and Control

Scientific principles provide a well-established, formal way of handling this analytical question: experimentation with a control. Translated into marketing speak, that is test and control.

To effectively evaluate programs, a random sample of customers, called a control group, must be held out of the mailing. The mailed group’s performance is then compared to the control group’s performance. Using this method, we can evaluate how much incremental play the program drove.

We present here a simple example that allows us to see how the test and control method can more accurately measure the profitability of a mail campaign compared to typical redemption analysis.

Imagine that we perform the same mailer campaign as above—10,000 players receiving a $10 offer with 4,000 players redeeming that offer. But in this example, there are actually 20,000 players eligible for the offer. We split the 20,000 players into two groups: 10,000 who receive the offer and another 10,000 who are randomly selected to receive no offer at all. We assume that the offer is valid for one day only, Oct. 4.

In Figure 1, we see the outcome of this offer. The first table shows the outcome on Oct. 4 for the test group, which received the offer. This was our original example of redemption analysis, where the test group generated $100,000 in revenue and $40,000 in expense, for a profit of $60,000. But this is not actual profit. The control group—customers with the exact same play patterns except for the fact that they received no offer for Oct. 4—generated $45,000 in revenue that day anyway. Thus, the incremental revenue generated by the offer was $100,000 – $45,000 = $55,000. The expense remains at $40,000, so the actual profit on Oct. 4 was, in fact, only $15,000.
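The Figure 1 comparison is simple arithmetic. A short Python sketch with the day-of numbers from the text:

```python
# Day-of test-and-control comparison (Figure 1 numbers).
test_revenue = 100_000     # test group play on Oct. 4
control_revenue = 45_000   # control group play on Oct. 4, with no offer
expense = 40_000           # 4,000 redemptions x $10

incremental_revenue = test_revenue - control_revenue   # $55,000
actual_profit = incremental_revenue - expense          # $15,000, not $60,000
```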

But it gets worse! This analysis only looks at play on Oct. 4. Remember our questions from earlier? What if the offer simply enticed players to shift their play from Oct. 3 or Oct. 5 to Oct. 4? We can measure this effect by including shoulder dates in the analysis. Figure 2 shows exactly this.

In this analysis, we expand the scope to include “shoulder dates” of three days prior and three days after the offer date of Oct. 4. (For simplicity’s sake, we assume that no other offers were valid from Oct. 1 to Oct. 7.) In Figure 2, we see that from Oct. 1 to Oct. 7, the test group generated $200,000 in revenue, but the control group, with no offers at all, generated $165,000 in revenue. Thus, the incremental gaming revenue is in fact only $35,000. Comparing with Figure 1, we can conclude that $20,000 in gaming revenue was simply shifted from other days to the offer date of Oct. 4. Subtracting the $40,000 expense, the analysis proves that our offer was actually unprofitable.
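Expanding to the seven-day window changes only the inputs. A sketch with the Figure 2 numbers from the text:

```python
# Seven-day window including shoulder dates (Figure 2 numbers).
test_week = 200_000      # test group revenue, Oct. 1-7
control_week = 165_000   # control group revenue, Oct. 1-7, no offers
expense = 40_000

incremental_week = test_week - control_week   # $35,000 true incremental revenue
shifted = 55_000 - incremental_week           # $20,000 merely moved from other days
net = incremental_week - expense              # -$5,000: the offer lost money
```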

Using test and control techniques can shed an amazing amount of light on our marketing campaigns. What appears at first glance (via redemption analysis) to be a hugely profitable program turns out, in fact, to be unprofitable.

Before we move on, a quick note on testing mechanisms. Gaming data is highly volatile, as discussed earlier in this article. As such, it is possible that a “random” split of your customers into test and control groups could leave one group with better customers than the other. There are at least two ways to minimize this issue. First, segmenting customers into smaller groups of players with similar behavior is not only good from the standpoint of more targeted offers, but it also reduces the chance of “unbalanced” test and control groups. Second, one can verify directly, before the test is finalized, that the groups have similar overall behaviors. If the data is skewed in favor of one group or the other, re-run the randomization.
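The second safeguard can be sketched in a few lines. This is one illustrative way to do it, assuming each player is represented as an (id, average daily spend) pair; the function name and 5 percent tolerance are our own choices, not a standard:

```python
import random
import statistics

def split_with_balance_check(players, tolerance=0.05, max_tries=20):
    """Randomly split players into test and control groups, re-running the
    shuffle if the groups' mean historical spend differs by more than
    `tolerance` (as a fraction of the larger mean)."""
    for _ in range(max_tries):
        shuffled = players[:]
        random.shuffle(shuffled)
        half = len(shuffled) // 2
        test, control = shuffled[:half], shuffled[half:]
        mean_t = statistics.mean(spend for _, spend in test)
        mean_c = statistics.mean(spend for _, spend in control)
        # Accept the split only if the two groups look alike overall.
        if abs(mean_t - mean_c) / max(mean_t, mean_c) <= tolerance:
            return test, control
    raise RuntimeError("could not balance the groups; consider segmenting first")
```

In practice one would compare more than a single spend statistic (visit frequency, recency and so on), but the re-randomize-until-balanced idea is the same.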

The Upside of Testing

The great thing about market testing is that bad tests can be abandoned while good tests can be implemented for months and sometimes years into the future. Thus, test and control is a process that builds on past successes. As a simplified example, let’s imagine that, through a combination of keeping good tests and throwing out bad ones, a casino is able to improve its direct mail profits by $10,000 each month. At the end of the year, the total profit is increased not by $120,000, but by $780,000 ($10,000+$20,000+$30,000+…+$120,000=$780,000).
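A quick sketch of that arithmetic, assuming each month's $10,000 gain persists for the rest of the year once it is earned:

```python
# Cumulative effect of keeping good tests: a new $10,000 of monthly
# profit is added each month and retained thereafter.
monthly_gain = 10_000
cumulative = sum(monthly_gain * month for month in range(1, 13))
print(cumulative)  # 780000
```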

The other benefit is that marketers, by their very nature, are motivated to come up with new and innovative ideas on a regular basis. If they aren’t using test and control, they will have no real idea if a crazy idea is going to work, which reduces the incentive to give it a try. However, with testing, no idea is too crazy if you can test it to determine if it is profitable.

Bringing It All Together

What we have illustrated here is that marketing with test and control provides true analysis of the results. However, this true analysis does not often result in an immediate lift in the business. Instead, it provides gradual and continuous improvement in the ongoing business. The opposite is also true: Inaccurate marketing may not result in an immediate drop in business, but it is more likely to bring on the beginning of a slow decline. An analogy is a supertanker: Turning the wheel has almost no immediate effect, but once the ship is turning, it is hard to stop. In our experience, the time periods for “turning the ship” are measured in months, but the results will be felt, and the final “direction” will be a dramatic change. In the world of gaming, it is our experience that marketers are responsible for revenue (even though the product is also of utmost importance), so having ways to show how the ship is turning is fundamental to taking us from a world of gut instinct and reactionary management to a world of entrepreneurial goal seeking.

Footnotes
1 Extracted from http://www.businessnewsdaily.com/3921-customer-service-reduces-client-ch… in October 2013.

2 Refer to CEM http://www.casinoenterprisemanagement.com/articles/june-2013/where%E2%80….

3 Refer to http://askville.amazon.com/long-milk-fridge-drink-recipe/AnswerViewer.do….

4 It is the authors’ intention to cover this topic in more depth in subsequent articles.
