Are you enjoying a steady increase in net revenue from each of your patrons? Spending less on offers and promotional activity but seeing an increase in total coin-in? If so, congratulations. You’re bucking the trend and have obviously found an ideal marketing technique for your organization. If not, then welcome to the club. The sad reality is that the casino gaming industry has become a poster child for putting too much bait on the hook when it comes to customer promotions.

With offers arriving in their mailboxes every month, it’s not surprising that many customers feel a sense of entitlement when it comes to free play and/or cash offers. And why shouldn’t they? The offers are consistent (most often based on a level of play that was itself controlled by the customer), the mail pieces have a similar look and tone, and they arrive with the predictability of a Social Security check. But if consistent play is what drives the consistent offers, might that same play still happen with a smaller promotional offer? Or with no offer at all? The answer is simple: It depends on the player.

“Offer sensitivity” describes a given player’s response to changes in the value of promotional offers. A player with high sensitivity would adjust his level of play significantly in response to an increase in offer value (his play would increase) or a decrease in offer value (his play would decrease accordingly). A player with low sensitivity would not show much of a change in play level—or his play activity might actually be opposite to the change in offer value. Note that this sensitivity specifically refers to the level of play (i.e., ADT, average daily theo) during a visit and not to the frequency of visits.
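
One simple way to put a number on this, assuming you can observe the same player’s ADT under two different offer values, is the ratio of the change in ADT to the change in offer value. This is a sketch only; the function name and the figures in it are illustrative, not industry-standard definitions.

```python
def offer_sensitivity(adt_baseline: float, adt_new: float,
                      offer_baseline: float, offer_new: float) -> float:
    """Change in average daily theo (ADT) per dollar of change in offer value.

    A large positive value suggests a highly sensitive player; a value near
    zero (or a negative one) suggests play that does not follow the offer.
    """
    delta_offer = offer_new - offer_baseline
    if delta_offer == 0:
        raise ValueError("offer value did not change; sensitivity is undefined")
    return (adt_new - adt_baseline) / delta_offer


# Hypothetical players: the same $10 increase in offer, very different responses.
print(offer_sensitivity(100, 130, 30, 40))  # 3.0  -> highly sensitive
print(offer_sensitivity(100, 102, 30, 40))  # 0.2  -> barely sensitive
```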

The mere suggestion of reducing the value of promotional offers can make some casino marketers recoil in horror: the cut would cause some patrons not to visit and would therefore hurt revenue. And that’s partly correct: There would be people who choose not to visit as a result, but careful targeting can help ensure that they are also the people who contribute less revenue than the value of the offer they typically receive. Total revenue is slightly reduced by their absence, but costs are reduced more—so net revenue actually increases.
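
A quick piece of arithmetic, with purely hypothetical numbers, shows how that trade-off works for a single offer-dependent player:

```python
# Hypothetical single-player example: theo per trip vs. the offer that drives the trip.
theo_per_trip = 25.0      # revenue the player contributes on a typical visit
offer_cost = 40.0         # free play / cash offer redeemed on that visit

net_with_offer = theo_per_trip - offer_cost      # -15.0: each incented visit loses money
net_without_offer = 0.0                          # player stays home: no revenue, no cost

print(net_with_offer, net_without_offer)
# Dropping (or trimming) the offer removes $25 of revenue but $40 of cost,
# so net revenue improves by $15 even though total revenue falls.
```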

The key lies in the targeting accuracy of these changes in offer amounts. Retailing tycoon John Wanamaker once said, “I know that half of my advertising doesn’t work. The problem is, I don’t know which half.” So it’s important to distinguish between “profitable” and “effective” when evaluating your existing marketing activities. A campaign designed to increase traffic might be judged effective based on the number of patrons visiting a property during the campaign period (maybe even showing an increase when compared with a prior period). Your operation may have evolved to the point where player activity is captured and can be associated with a particular marketing piece—perhaps by pre-loading the offer information onto each player’s account and tracking play that qualifies under the offer parameters. You may be lulled into believing that a promotion has “worked” because you can point to a report showing the total revenue from all the players who participated in that particular program, and that revenue appears to be higher than in some previous period.

But there’s a key element missing: How much of that revenue would have been created by those same players even if they had not received the promotional offer? The true measure of the success of any campaign is the difference in net revenue contribution between customers who received the offer and those who did not. (Net revenue here is defined as the total revenue received from a player, minus the cost of any free play/cash incentive and the marketing costs connected with delivering that offer.) Measuring and reporting on that difference is not difficult, though it does require some advance planning. Only by examining the difference in performance between people who received the offer and those who did not can you be sure that you’re measuring the true impact of the campaign. Otherwise, you could be reading the effects of any number of outside influences: a competitor being more or less aggressive during this period than in the past, other promotions running during this campaign that were not running in a prior period, or changes in general economic conditions that influenced the current results compared to a prior period.
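
As a rough illustration of that calculation (a sketch only; the field names theo, free_play_redeemed, and mail_cost stand in for whatever your player-tracking and campaign systems actually record), the test-versus-control lift could be computed along these lines:

```python
from statistics import mean

def net_revenue(player: dict) -> float:
    """Net revenue for one player: total theo less incentive and delivery costs."""
    return player["theo"] - player["free_play_redeemed"] - player["mail_cost"]

def campaign_lift(test_group: list[dict], control_group: list[dict]) -> float:
    """Average net-revenue difference per player between the test and control panels."""
    return mean(net_revenue(p) for p in test_group) - mean(net_revenue(p) for p in control_group)

# Hypothetical results for a handful of players in each panel.
test = [{"theo": 420, "free_play_redeemed": 30, "mail_cost": 2},
        {"theo": 385, "free_play_redeemed": 30, "mail_cost": 2}]
control = [{"theo": 410, "free_play_redeemed": 40, "mail_cost": 2},
           {"theo": 400, "free_play_redeemed": 40, "mail_cost": 2}]

print(campaign_lift(test, control))  # positive lift means the test treatment won after costs
```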

To accurately determine the value gained from a promotional offer, the set of customers who receive it (the test group) must be exactly the same as those who do not (the control group), and the measurement must take place during the same time period. “Exactly the same” refers to their gaming behavior, perhaps measured as ADT or AMT (average monthly theo), or the metric you currently use to determine a player’s relative value. The key is to look backward at each player’s historic transactions and select a group (or groups) of players who have shown similar play levels. Then divide them into test and control panels and send the promotional offer in question only to the test group. By applying different treatments to two groups of people who have historically performed similarly, you can attribute any current difference in performance to that difference in treatment, because you are measuring only the relative difference between the two groups. For example, you find 400 patrons who have had an ADT of $100 for the past six months and divide them into test and control groups, each containing 200 patrons. The control group receives the offer(s) you had already planned to distribute this month—but the test group has its offer(s) reduced by $10. When you read the results of this month’s campaign, any difference between the two groups can accurately be attributed to that difference in offer amount.
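
A minimal sketch of that panel-building step might look like the following; it assumes each player record carries a hypothetical adt_6mo field and that eligibility is defined by a simple tolerance band around the target ADT.

```python
import random

def build_panels(players: list[dict], target_adt: float, tolerance: float = 5.0,
                 panel_size: int = 200, seed: int = 42) -> tuple[list[dict], list[dict]]:
    """Select players whose six-month ADT is close to a target and split them
    at random into equally sized test and control panels."""
    eligible = [p for p in players if abs(p["adt_6mo"] - target_adt) <= tolerance]
    if len(eligible) < 2 * panel_size:
        raise ValueError("not enough similar players for panels of this size")
    rng = random.Random(seed)      # fixed seed so the split can be reproduced
    rng.shuffle(eligible)
    test = eligible[:panel_size]
    control = eligible[panel_size:2 * panel_size]
    return test, control
```

For the example above, build_panels(players, target_adt=100, panel_size=200) would return the two 200-player panels; the reduced offer then goes only to the test panel, and the results are read with a lift calculation like the one sketched earlier.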

A key advantage to this approach is that even if external conditions change (a competitor increases or decreases marketing efforts, weather conditions make travel more or less favorable, etc.) the difference in performance between the two groups is still an accurate measure of your particular change in offer amount, because you’re not comparing this period to any prior period when those external conditions were different. The overall performance of the customer base can increase or decrease in response to those external factors, but the relative performance of the test versus control groups is equally affected by those factors, so any difference that’s measured is only due to the particular change in marketing treatment that you were testing.

We used this approach to test offer sensitivity among patrons of a gaming client, with the goal of learning whether the same total marketing dollars could be used more effectively by providing higher incentives to some players and reducing incentives to others. If all players were equally sensitive to changes in offer amount, then this “reallocation” would have no real effect: those who received higher offers would play more, those who received lower offers would play less, and the gains and losses would cancel each other out. But if some players were more or less sensitive to the offer amount than others, we could leverage those differences. Giving a higher offer to a player with high sensitivity will most likely result in additional play; reducing an offer to a player with low sensitivity will most likely result in no change in play (but with a reduction in cost).

The initial results were encouraging. A careful decrease in offer amounts given to randomly selected individuals showed that net revenue could be increased by 1 to 2 percent. Although their total play decreased, the cost savings from the lower offer amount more than offset that revenue decrease, so the net effect was positive. But when we tried to model those results, so that we could identify ahead of time which customers would show more or less sensitivity and adjust their offers accordingly, we found very little consistency among players. There do not seem to be one or two special traits that can be used to predict how much a particular player will increase or decrease his play in response to changes in his promotional offer value.

But just because that sensitivity is hard to predict doesn’t mean you have to settle for mere 1 to 2 percent improvements in net revenue. The same scientific, controlled marketing tests have also shown that conditional offers are a very efficient way to drive significant gains in net revenue. The traditional free play/cash offers that have come to be viewed as entitlements are based primarily on past performance and essentially require only that a player show up in order to receive his reward. However, by combining that historic play level with a forward-looking “stretch” amount—with an accompanying increase in offer value—you can cause players to increase their total play amounts. And because these are conditional offers, if that additional play doesn’t materialize, there’s no additional cost to the organization.

Of course, setting the personalized stretch goal and the corresponding offer amount is the hard part. Set the goal too low and you’re back into entitlement territory, with promotional costs that don’t create any additional revenue. Set the goal too high and players will see it as unattainable and not even try to hit the higher level (regardless of how good the offer is). The same holds true for the rewards: Make the reward too small and customers won’t try to reach their stretch goal (even if it’s attainable); make those rewards too rich and your margins disappear (despite the additional play). With the correct settings, as measured and validated by well-formed test and control groups, conditional offers can yield net revenue gains of 8 to 10 percent, and even higher among some types of players.
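
The economics of a conditional offer can be illustrated with a small, entirely hypothetical example; the stretch target, reward, and base offer below are placeholders, not recommended values.

```python
def conditional_offer_net(theo: float, stretch_target: float, reward: float,
                          base_offer: float = 0.0) -> float:
    """Net revenue for one player under a conditional 'stretch' offer.

    The extra reward is paid only if actual theo reaches the stretch target,
    so a missed target costs nothing beyond the base offer the player gets anyway.
    """
    cost = base_offer + (reward if theo >= stretch_target else 0.0)
    return theo - cost

# Hypothetical player with a historical ADT of $100, asked to stretch to $140
# for an extra $20 in free play on top of a $25 base offer.
print(conditional_offer_net(theo=150, stretch_target=140, reward=20, base_offer=25))  # 105: stretch met
print(conditional_offer_net(theo=100, stretch_target=140, reward=20, base_offer=25))  # 75: stretch missed, no extra cost
```

Because the extra $20 is paid only when the $140 target is met, the player who misses the target costs no more than before, while the player who hits it contributes $30 more in net revenue than the $75 baseline.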

The gaming industry is able to gather a great deal of data about its customers, yet harvesting and storing that information represents a significant cost to your organization. Since most operators can capture the same type of information, the primary competitive advantage comes not from simply having it, but from using it effectively. So, how do you produce net revenue gains that outpace your competitors? We suggest:

1. Measure the sensitivity of your customers to promotional offers that are specifically designed to incent the customer to stretch his/her normal play amounts.
2. Measure through a scientific test and learn process the impact of each promotional offer on both revenue and cost for each customer.
3. Develop and implement sensitivity models and optimization algorithms designed to send each customer the offer that maximizes his/her net revenue (a minimal sketch of this last step follows the list).
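
How that last step might look in its simplest form is sketched below; the response curve is a toy stand-in for whatever sensitivity model your own test-and-control reads eventually support, and every name in it is hypothetical.

```python
def best_offer(predict_net_revenue, candidate_offers: list[float]) -> float:
    """Pick the offer value whose predicted net revenue is highest for a player.

    predict_net_revenue is whatever validated model you have for this player,
    passed in here as a plain function of the offer amount.
    """
    return max(candidate_offers, key=predict_net_revenue)

# Toy model for one player: a modest lift from richer offers, but each extra
# dollar of free play is also a dollar of cost.
def toy_model(offer: float) -> float:
    expected_theo = 100 + 0.8 * offer   # illustrative response curve only
    return expected_theo - offer

print(best_offer(toy_model, candidate_offers=[10, 20, 30, 40]))  # 10: cost outruns lift for this player
```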
