Measuring Success

In most successful endeavors, there is some yardstick by which success is gauged. We’ve given considerable thought over the years to the best methods to assess how well a gaming regulatory agency accomplishes its mission. Unlike the plethora of statistical measures that are useful in a sport such as baseball, where the earned run averages of the pitchers and the on-base or batting averages of the hitters represent reliable benchmarks, the objectives of a regulatory agency do not readily lend themselves to quantifiable comparison. Regulatory agencies also cannot be judged like a business, where profitability and other financial ratios are the hallmarks of success.

For good or ill, however, we live in an age of data and statistics. Computing power enables us to gather and “crunch” massive amounts of information. Not surprisingly, this has spawned entire enterprises dedicated to measuring, well, everything. This ability, in turn, feeds programs designed to use available information to improve all levels of performance, such as Total Quality Management (TQM), the Baldrige National Quality Program, ISO 9001, the Balanced Scorecard and Six Sigma.

The danger, of course, is paralysis by analysis. When the focus becomes gathering and analyzing data, the mission of the enterprise can get lost in the shuffle.

So how should regulatory agencies assess success or a lack thereof?

Management Statistics
One place not to start is what we call management statistics. This is the sort of data that tells an agency what is being done but not necessarily how well it is being done. The number of license applications processed or compliance inspections conducted in a given time frame falls within this category. This information is perhaps helpful in budgeting and resource allocation but little else.

In many ways, management statistics are sound and fury that often signify nothing. An agency that does thousands of investigations or inspections annually but does them in a shoddy fashion is certainly less “successful” than an agency that does comparatively few but does them very well.

By way of illustration, we once had a very industrious employee who did field inspections of electronic gaming devices. In asking for a raise, he pointed, with justifiable pride, to the fact that he had conducted more than 8,000 such inspections during the rating period. However, when asked how many problems he found, he responded with some chagrin that he had not found any. The number of device inspections by itself tells nothing about the manner or protocol for performing those inspections. A methodology that looks only at new devices or is limited in scope may well miss important defects.

Similarly, licensing investigators who conduct superficial or perfunctory examinations may rack up impressive statistics that are essentially meaningless from a quality perspective. We are reminded of a Seinfeld episode in which the George Costanza character is upset that he hasn’t received an appropriate apology from a person who is enrolled in an Alcoholics Anonymous program (making amends is Step Nine of the 12-step program). He confronts the individual’s sponsor, at which time the following dialogue takes place:

George: “Whatever. Listen, I’m very concerned about this guy.”
Sponsor: “He’s doing very well. He’s already on to Step 10.”
George: “Yeah, well when you don’t actually do the steps, you can go through them pretty quick. You can get through six a day.”

The point is that an agency can have the best background investigation protocol in the world but still produce poor results if the steps are not actually done. Yet the statistical data alone would not reflect the level of quality.

Misleading Data
In addition to irrelevant data, some statistics can be outright misleading. For example, our eyebrows are usually raised when we hear agencies boast about the number of disciplinary complaints issued, fines assessed or licenses revoked. On one hand, such statistics can indicate a vigorous agency with effective compliance programs. But they can also indicate an agency that is excessively punitive or that has failed to adequately establish a culture of compliance among its licensees.

One admittedly extreme example of misleading statistics involves an instance where we discovered that law enforcement personnel assigned to assist an agency’s compliance staff were counting every counterfeit bill found in the casinos as a separate “case.” Casino surveillance and security had been instructed to provide notice each time a counterfeit bill was discovered, whether on a table, at the cage or in the count room. In most instances, it was impossible to determine the origin of the bill, especially since most appeared in the soft count. Hence, each “case” typically involved putting the bill in a clear envelope and turning it over to the Secret Service, the federal agency designated to handle such matters. This essentially meaningless process inflated compliance statistics wonderfully.

Gauging Success
As you have probably gathered by now, certain statistics are not useful when it comes to assessing the effectiveness of a regulatory agency. There are some yardsticks, however, that we believe do have value.

One area that we look at closely is patron complaints. Of course, it is necessary to cull out those filings from people who have convinced themselves that the casino must have been up to some skullduggery merely because they lost money, but a well-regulated casino should have relatively few meritorious patron disputes. Casinos that have frequent disputes or where the disputes tend to involve the same or similar subject matter are more likely than not to be poorly regulated.

Quantitative measures are also useful when the focus is not on what was done but on the results of what was done. For example, a field inspection program that randomly and thoroughly evaluates devices in a systematic fashion should uncover few, if any, revoked or obsolete software programs. A dearth of such defects is a good indication that programs designed to ensure compliant devices are effective. Similarly, the discovery of such problems on a regular basis suggests a need for better training of casino personnel and more oversight of machine maintenance and change-out processes.

The same approach can be applied to compliance inspections in general. A program that uncovers few rule or control violations indicates good regulatory effort. Of course, it is essential that there be a well-organized and thorough inspection program for such data to have any meaning. We have long used a structured monitoring process that contains specific directions for compliance personnel as to what to look at, how to look at it and when to look at it. Without such structure, in our experience, inspections tend to become repetitive and superficial. Namely, compliance personnel will look at the same area or process over and over even though the inspection rarely detects anything of significance, merely because it is something they are familiar with and because it represents the path of least resistance (i.e., inspecting the same drop or count at the same time each shift).

Another area where statistics can be revealing is the time it takes to perform certain functions. Agency managers need to benchmark how long different categories of inspections, reviews or investigations should take. If the averages start to slip in one direction or the other, closer scrutiny of what is actually being done is in order. When Leen managed the gaming laboratory in Detroit, he found that the average time for processing approvals had crept up to nearly 12 months. He decided to attack the problem by halting all submissions for two months and carefully examining all the steps in the approval process. This review resulted in a streamlining of procedures and a significant improvement in turnaround time. When Leen retired from his position with the agency, the laboratory was acknowledged as the most efficient in the country.

The opposite problem is indicated by too rapid completion of certain tasks. Agents who turn over files in record time may be highly efficient or, in our experience, simply not performing all the necessary steps in a thorough and conscientious manner. To echo our friend George Costanza, “You can do an investigation pretty quick if you don’t actually investigate anything.”

A Final Word
Ultimately, the only real way to judge a regulatory agency is by the level of compliance exhibited by its licensees. Some agencies spend far too much time worrying about internal processes and not enough time trying to honestly assess whether their programs are effective in obtaining substantial compliance with regulatory requirements. It’s an imperfect world, and there will always be cheats, dishonest casino employees and defective gaming devices. It is critical to recognize when the problem represents the aberration and not the norm. The measure of a truly effective regulator is whether the agency is finding important problems and getting them corrected. As Nelson used to tell our compliance staff, it is essential to “find the icebergs that will sink the ship before we hit them.”
