To find the money once again this month, let’s dig into the analytical challenges posed by one of gaming’s great titles, Aristocrat’s Buffalo. The game confounds mathematicians and analysts alike with its remarkable ability to keep generating revenue, despite many versions being nearly 10 years old. Buffalo has been reinvigorated in new cabinets and on new gaming platforms but, in essence, it remains the same remarkable game it has always been. Let’s explore the analytical implications of such a game, addressing how to deal with what can only be considered remarkable data.

Black Swans of Gaming
When trying to understand the world using mathematics, we are often challenged by attempts to predict the outcome of an event or the most successful course of action. A “black swan event”1 was first defined by Nassim Nicholas Taleb as an event that is so extreme, it redefines the space. According to Taleb, a black swan event is characterized by three factors: “First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.”2

We can, then, consider “black swan games” as games that are so extreme in their performance that they are impossible to predict. Buffalo is one such game. In 2012, Goldman Sachs rated Buffalo as the No. 1 game in North America3, which is remarkable enough on its own; the fact that Buffalo is a 10-year-old game makes it astonishing.

Figure 1: Aristocrat’s Buffalo Game
If anybody actually understood exactly what the success criteria for black swan games are, there would be a proliferation of hyper-successful games; instead, the industry still abounds with less successful ones. The problem with pinpointing what will make a game a black swan lies in the definition of a black swan itself: predicting an outlier is oxymoronic.

In our experience in building predictive models (a truly fun pastime for us authors), we have learned the hard way that models are good at predicting data that behaves the way the central limit theorem says it should, but poor at predicting outliers. Yet, in the world of gaming machines, a large portion of the data we care about consists of outliers. Consider games like Buffalo, as well as IGT’s Bombay; these games are so remarkable and differentiated that their placement and optimization are often critical to the whole gaming floor.
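
To make this concrete, here is a minimal sketch, in Python and with invented numbers, of the problem: fit a normal, CLT-style model to heavy-tailed performance data, and the best game on the floor looks statistically impossible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical win-per-unit figures for 5,000 games: most cluster near the
# mean, but performance is heavy-tailed (a few Buffalo-like outliers).
# Every number here is invented purely for illustration.
floor = 200 + 50 * rng.standard_t(df=2, size=5_000)

mu, sigma = floor.mean(), floor.std()
z_max = (floor.max() - mu) / sigma

# A normal model fit to this data treats an observation this many standard
# deviations out as effectively impossible, yet the data plainly contains it.
print(f"Best game sits {z_max:.1f} sigmas above the fitted mean")
```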

From the perspective of predictive models, it is extremely difficult to model what makes these games successful. Consider the simple example of a game that shares exactly the same math model as a hit yet flops on the floor. If math models were the predicting factor of popularity, we could expect similar customer preference responses to every theme with the same math. But as we all know, that’s simply not so on a real-life gaming floor.

Why doesn’t predictive modeling work perfectly? It’s partly because there is a common misunderstanding that predictive models see into the future. In fact, predictive models take historical trends and tell us what the future will be if these historical trends continue—and they very well might not. (See our article on Big Data in the March 2012 issue of CEM4 for an introduction to predictive models.)
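
As a rough illustration of that distinction, consider the following Python sketch (the numbers are made up): the “prediction” is nothing more than the historical trend projected forward, and it is only as good as the assumption that the trend continues.

```python
import numpy as np

rng = np.random.default_rng(1)

# Three years of invented monthly win-per-unit figures for one game:
# a steady upward trend plus noise. Nothing here is real data.
months = np.arange(36)
win_per_unit = 180 + 0.5 * months + rng.normal(0, 5, size=36)

# A predictive model in its simplest form: fit the historical trend...
slope, intercept = np.polyfit(months, win_per_unit, deg=1)
forecast = slope * 48 + intercept  # ...and extrapolate it to month 48.

# A new cabinet, a competitor's hit or a black swan game on the next bank
# breaks the trend, and nothing in the fitted line can warn us.
print(f"Month-48 forecast: ${forecast:.0f} win per unit per day")
```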

Mankind has been using—and misusing—predictive models far longer than we’ve been using computers. For example, farmers plan their crops based on historical weather patterns, and when they do this, they are applying a predictive model (“the weather this year should look like …” + “the soil in this area grows this crop best …”). The plight of farmers, of course, comes when extreme weather that cannot be planned for or predicted arrives and destroys the crops. To handle these outliers, the farmer could, and possibly should, purchase insurance to enable survival in the case of disaster. The insurance company, meanwhile, does not predict the disasters but attempts to calculate the likelihood of these black swan events.

The same black swan limitation is true for today’s predictive models. While computers give us the ability to take far more variables into account and to understand much smaller nuances in each variable, these models are nevertheless exposed to the same weaknesses as our poor farmers—they cannot account for black swan events. As for the insurance companies, they make extensive use of computing power to build sophisticated risk models, but in general terms, these models are not truly predictive, although they do model risk in minute detail.

Figure 2: Bombay by IGT
In summary, in today’s world of black swan games, predictive models definitely have their place, but they are dangerous precisely because they cannot predict outliers. Quite simply, if the model is designed to predict successful slot machines, the model will fail, because a successful slot machine is a black swan event.

How Do You Model Artwork?
Let’s consider an example of another black swan game, Bombay. In certain regions of the U.S., Bombay is like Buffalo in that it has consistently performed as one of the best core games on a slot floor for many years. Bombay, however, has a “clone” called Sands of Gold that performs far less remarkably across multiple denoms and multiple casinos. Sands of Gold has the same math as Bombay and, aside from the artwork, they are the exact same game. But no matter the situation, Bombay is preferred over Sands of Gold, usually performing 200 percent higher or more.

How does this happen? How do two games that are essentially the same end up deviating so much simply because of different artwork? And is it the artwork, or did something else drive this—perhaps one game being released before the other?

There are many more examples of this phenomenon, in which games with the exact same math but different artwork produce vastly different results. So, in addition to our challenge of trying to predict a black swan, we are now left with trying to incorporate artwork into our models. The approach that we have applied very successfully is a combination of intuition, experimentation and customer behavior. (Examples of intuitive decision making abound in the real world: Consider how actor Will Smith investigated the marketplace and applied an elementary predictive model to determine which movie he should star in: “First, he gathered the right data—information that was current, accurate, relevant and sufficient to make his decision. Second, he analyzed it for patterns or insights, and discovered that the top 10 movies included special effects; nine of 10 included special effects with creatures; and eight of 10 included special effects, creatures and a love story. His first two movies, Independence Day and Men in Black, followed that model, and grossed $1.3 billion combined.”5)

The Magical Black Box
But before we get any deeper into our investigation, let’s back up for a moment to ask if there is even value in trying to predict black swan slot machines if, by nature, they are not predictable. Certainly, from the slot manufacturer’s perspective, there is immense value in predicting which machines are going to be successful. However, from a slot operator’s perspective, is this value reduced?

Let’s assume we have a magical “black box” that can predict which of a slot machine manufacturer’s games is going to be successful. First, we have to properly define “success.” If a game does four times the floor average in its first month, only to decline to half the floor average by the sixth month, it is probably not a successful game. Instead, let’s define a successful game as one that meets or exceeds the floor average for an extended period of time. Now let’s imagine that we have two operators. One operator, Bob, has the magical black box and knows which games are going to be successful. The other operator, Larry, doesn’t have this black box.

Bob and Larry both have a bank of six games they need to fill with new product from our manufacturer. Bob has his magical black box, and thus knows that Game A is going to succeed. The problem is, Bob can’t order six copies of Game A. If he does this, he will only be appealing to customers who like Game A! While it is going to be a popular game for a long time, Game A is not going to appeal to every single customer. So Bob orders four of Game A and two of Game B to complete the bank. Larry lacks the magical black box, so he orders three of Game A and three of Game B.

Now, after a couple of months, Bob and Larry review the play of their new bank of six games. Bob, with his magical black box, had predicted the right mix of games, and does not make any changes to his floor. Larry, however, sees that he needs more Game A and less Game B, so he is forced to pay for a conversion kit.

For the sake of this example, let’s assume that Game A earns $200 per day, compared to Game B, which makes $100 per day. Let’s also assume that conversion kits cost $3,000 each. Finally, let’s assume it took 60 days for Larry to make his change. With this information, we can put a one-year value on Bob’s magical black box. (See Figure 3.)
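
Figure 3 is not reproduced here, but its arithmetic can be reconstructed from the assumptions above; the following sketch takes the game mixes from the Bob and Larry example and counts Larry’s conversion kit as part of the gap.

```python
# Bob's black box lets him order the right mix (four Game A, two Game B) on
# day one; Larry orders three and three, waits 60 days, then buys one
# conversion kit to match Bob's mix.
GAME_A_PER_DAY = 200   # $/day win for Game A
GAME_B_PER_DAY = 100   # $/day win for Game B
KIT_COST = 3_000       # $ per conversion kit
DAYS_TO_CONVERT = 60   # days before Larry corrects his mix

bob_daily = 4 * GAME_A_PER_DAY + 2 * GAME_B_PER_DAY    # $1,000/day
larry_daily = 3 * GAME_A_PER_DAY + 3 * GAME_B_PER_DAY  # $900/day

revenue_gap = (bob_daily - larry_daily) * DAYS_TO_CONVERT  # $6,000
one_year_value = revenue_gap + KIT_COST                    # $9,000

# After the conversion the two floors are identical, so the gap stops growing.
print(f"One-year value of the black box: ${one_year_value:,}")
```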

So our magical black box was worth a lift of $9,000 over the course of the first year, and after that first year, there would be no lift at all. In actuality, then, it is experimentation on the gaming floor that can best be used to determine the quality of a game.

Review of Customer Preference
Furthermore, as our previous case studies at Silverton Casino showed, we are more interested in optimization metrics than pure game performance. For example, the case study published in the June 2012 issue of CEM6 about Jackpot Wharf showed that even though a “game was performing well … we can see that customer preference shines a new light onto understanding of the gaming floor. In this example, we should see how players ‘moderately interested’ in Paradise Fishing have much different product preferences when compared to customers who are moderately interested in Bank A [adjacent bank]. From this, we immediately see that Bank A players are a lot less inclined to play video slot participation products. They are most likely to play house WMS video slot games, some of the older IGT penny titles and Bally’s Blazing 7s.”

When a Buffalo is a Black Swan
Buffalo is, quite simply, an amazing game. On many gaming floors, it is a category in itself. This incredible game is a perfect example of an outlier so extreme that it makes math models based on game attributes fall apart at the seams. We have discussed that successful games are more like random acts of nature than predictable machines; we can be prepared for them, expect them to occur, and even predict their magnitude, but we cannot apply models to determine the performance of a game or whether it will be the next great hit. This brings us back to optimization based on experimentation and customer preference. There is little doubt that these methods offer a powerful way of improving bottom-line results.

1 Refer to http://en.wikipedia.org/wiki/Black_swan_theory.
2 Extracted from www.nytimes.com/2007/04/22/books/chapters/0422-1st-tale.html?_r=1, July 2012.
3 Refer to http://finance.yahoo.com/news/aristocrat-tops-2012-goldman-sachs-2139005…, July 2012.
4 CEM March 2012, Cardno, Thomas: “Where is the Money, Part 9.”
5 Source: www.greenbook.org/marketing-research.cfm/will-smith-business-man-06176.
6 CEM June 2012, Cardno, Thomas, Evans, Conklin: “Where is the Money, Part 12: Magnet Games and Paradise Fishing.”
