Marketing campaigns make good proof-of-concept projects because in most
companies there is already a culture of measuring the results of such
campaigns. A controlled experiment showing a statistically significant
improvement in response to a direct mail, telemarketing, or email campaign is
easily translated into dollars. The best way to prove the value of data mining
is with a demonstration project that goes beyond evaluating models to actually
measuring the results of a campaign based on the models. Where that is not
possible, careful thought must be given to how to attach a dollar value to the
results of the demonstration project. In some cases, it is sufficient to test
the new models derived from data mining against historical data.

A SUCCESSFUL PROOF OF CONCEPT?

A data mining proof of concept project can be technically successful, yet
disappointing overall. In one example, a cellular telephone company launched
a data mining project to gain a better understanding of customer churn. The
project succeeded in identifying several customer segments with high churn
risk. With the groups identified, the company could offer these customers
incentives to stay. So far, the project seems like a good proof-of-concept that
returns actionable results.
The data mining models found one group of high-risk customers, consisting
of subscribers whose calling behavior did not match their rate plans. One
subgroup of these customers was on rate plans with low monthly fees, and
correspondingly few included minutes. Such plans make sense for people who
use their phones infrequently, such as the “safety user” who leaves a telephone
in the car’s glove compartment, rarely turning it on but more secure in the
knowledge that the phone is available for emergencies. When such users
change their telephone habits (as sometimes happens once they realize the
usefulness of a mobile phone), they end up using more minutes than are
included in their plan, paying high per-minute charges for the overage.
The company declared the data mining project a success because the groups
that the model identified as high risk were tracked and did in fact leave in
droves. However, nothing was done because the charter of the group
sponsoring the data mining project was to explore new technologies rather
than manage customer relationships. In a narrow sense, the project was indeed
successful. It proved the concept that data mining could identify customers at
high risk for churn. In a broader sense, the organization was not ready for data
mining, so it could not successfully act on the results.
There is another organizational challenge with these customers. As long as
they remain, the mismatched customers are quite profitable, either paying
expensive overage charges or paying for a more expensive rate plan than they
need. Moving them to a rate plan that would save them money (“right-planning”
them) might very well decrease churn
but also decrease profitability. Which is more important, churn or profitability?
Data mining often raises as many questions as it answers, and the answers to
some questions depend on business strategy more than on data mining results.





Implementing the Proof-of-Concept Project
Once an appropriate business problem has been selected, the next step is to
identify and collect data that can be transformed into actionable information.
Data sources have already been identified as part of the process of selecting the
proof-of-concept project. The next step is to extract data from those sources
and transform it into customer signatures, as described in the previous chapter.
Designing a good customer signature is tricky the first few times. This is an
area where the help of experienced data miners can be valuable.
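
As a small illustration of what such a transformation looks like, here is a
minimal sketch in Python using pandas. The toy call records, column names, and
observation-window cutoff are all invented; a real signature would draw on many
more sources and fields.

```python
import pandas as pd

# Toy call-detail records standing in for an extract from the operational
# systems; the column names here are hypothetical.
calls = pd.DataFrame({
    "customer_id": [101, 101, 102, 103, 103, 103],
    "call_date": pd.to_datetime(
        ["2024-05-01", "2024-06-12", "2024-04-20",
         "2024-06-01", "2024-06-15", "2024-06-28"]),
    "call_minutes": [3.0, 12.5, 45.0, 7.0, 1.5, 22.0],
})

# One row per customer, summarizing behavior over the observation window.
signature = calls.groupby("customer_id").agg(
    total_minutes=("call_minutes", "sum"),
    avg_minutes_per_call=("call_minutes", "mean"),
    n_calls=("call_minutes", "size"),
    last_call_date=("call_date", "max"),
)

# Derived fields often carry much of the signal, for example recency in days
# relative to a (hypothetical) end of the observation window.
cutoff = pd.Timestamp("2024-06-30")
signature["days_since_last_call"] = (cutoff - signature["last_call_date"]).dt.days
print(signature)
```

The point of the sketch is the shape of the result: one row per customer, with
raw events rolled up into behavioral summaries and derived fields such as
recency.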
In addition to constructing the initial customer signature, there needs to be
a prototype data exploration and model development environment. This
environment could be provided by a software company or data mining consultancy,
or it could be constructed in-house as part of the pilot project. The data
mining environment is likely to consist of a data mining software suite
installed on a dedicated analytic workstation. The model development
environment should be rich enough to allow the testing of a variety of data
mining techniques. Chapter 16 has advice on selecting data mining software and
setting up a data mining environment. One of the goals of the proof-of-concept
project is to determine which techniques are most effective in addressing the
particular business problem being tackled.
Using the prototype data mining system involves a process of refining the data
extraction requirements and interfaces between the environment and the existing
operational and decision-support computing environments. Expect this to be an
iterative process that leads to a better understanding of what is needed for the
future data mining environment. Early data mining results will suggest new
modeling approaches and refinements to the customer signature.
When the prototype data mining environment has been built, use it to build
predictive models to perform the initial high-payback task identified when the
proof-of-concept project was defined. Carefully measure the performance of
the models on historical data.
It is entirely feasible to accomplish the whole proof-of-concept project without
actually building a prototype data mining environment in-house, by using
external facilities. There are advantages and disadvantages to this approach.
On the positive side, a data mining consultancy brings insights gained through
experience working with data from other companies to the problem at hand. It is
unlikely that anyone on your own staff has the knowledge and experience with the
broad range of data mining tools and techniques that specialists can bring to
bear. On the negative side, you and your staff will not learn as much about the
data mining process if consultants do all the actual data mining work. Perhaps
the best compromise is to put together a team that includes outside consultants
along with people from the company.

Act on Your Findings
The next step is to measure the results of modeling. In some cases, this is best done
using historical data (preferably an out-of-time sample for a good comparison).
Another possibility that requires more cooperation from other groups is to set up
a controlled experiment comparing the effects of the actions taken based on data
mining with the current baseline. Such a controlled experiment is particularly
valuable in a company that already has a culture of doing such experiments.
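
To make the historical-testing option concrete, the sketch below fits a model on
earlier months and evaluates it on a later, out-of-time sample. The data is
synthetic, and the column names, response mechanism, and cutoff month are
invented purely for illustration; it is a sketch of the procedure, not a recipe
for any particular tool.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "month": rng.integers(1, 13, n),            # snapshot month, 1..12
    "total_minutes": rng.gamma(2.0, 100.0, n),  # behavior summarized per customer
    "n_calls": rng.poisson(20, n),
})
# Synthetic response flag, loosely tied to usage so there is something to learn.
df["responded"] = (rng.random(n) < 0.02 + 0.003 * df["n_calls"]).astype(int)

# Fit on January through September; hold out October through December as the
# out-of-time sample.
train = df[df["month"] <= 9]
test = df[df["month"] > 9]

features = ["total_minutes", "n_calls"]
model = LogisticRegression(max_iter=1000).fit(train[features], train["responded"])
scores = model.predict_proba(test[features])[:, 1]
print("out-of-time AUC:", round(roc_auc_score(test["responded"], scores), 3))
```

Scoring a period that the model never saw during training gives a more honest
preview of how it will behave when deployed than a random holdout from the same
period would.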
Finally, use the results of modeling (whether from historical testing or an
actual experiment) to build a business case for integrating data mining into the
business operations on a permanent basis.
Sometimes, the result of the pilot project is insight into customers and the
market. In this case, success is determined more subjectively, by providing
insight to business people. Although this might seem the easier proof-of-concept
project, it is quite challenging to find results in a span of weeks that make a
favorable impression on business people with years of experience.
Many data mining proof-of-concept projects are not ambitious because they
are designed to assess the technology rather than the results of its application.
It is best when the link between better models and better business results is not
hypothetical, but is demonstrated by actual results. Statisticians and analysts
may be impressed by theoretical results; senior management is not.
A graph showing the lift in response rates achieved by a new model on a test
dataset is impressive; however, new customers gained because of the model
are even more impressive.

Measure the Results of the Actions
It is important to measure both the effectiveness of the data mining models
themselves and the actual impact on the business of the actions taken as a
result of the models’ predictions.
Lift is an appropriate way to measure the effectiveness of the models themselves.
Lift measures the change in concentration of records of some particular
type (such as responders or defaulters) relative to model scores. To measure
the impact on the business requires more information. If the pilot project
builds a response model, keep track of the following costs and benefits:
- What is the fixed cost of setting up the campaign and the model that supports it?
- What is the cost per recipient of making the offer?
- What is the cost per respondent of fulfilling the offer?
- What is the value of a positive response?

The last item seems obvious, but is often overlooked. We have seen more
than one data mining initiative get bogged down because, although it was
shown that data mining could reach more customers, there was no clear model
of what a new customer was worth and therefore no clear understanding of
the benefits to be derived.
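
The sketch below pulls these measurements together: it computes the lift of the
top-scoring decile on a holdout set and then the profit of a campaign aimed at
that decile, using the four quantities listed above. Every score, response rate,
and dollar figure is invented for illustration, not offered as a benchmark.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
scores = rng.random(n)                             # model scores for a holdout set
responded = rng.random(n) < 0.02 + 0.08 * scores   # responders concentrate at high scores

# Lift of the top decile: response rate among the top 10% of scores divided by
# the overall response rate.
top_decile = scores >= np.quantile(scores, 0.9)
lift = responded[top_decile].mean() / responded.mean()
print(f"top-decile lift: {lift:.2f}")

# Campaign economics for mailing the top decile, using the four quantities
# listed above. All dollar amounts are hypothetical.
fixed_cost = 10_000.0
cost_per_recipient = 0.75
cost_per_respondent = 10.0
value_per_response = 120.0

n_contacted = top_decile.sum()
n_responses = responded[top_decile].sum()
profit = (n_responses * value_per_response
          - fixed_cost
          - n_contacted * cost_per_recipient
          - n_responses * cost_per_respondent)
print(f"contacted {n_contacted}, responses {n_responses}, profit ${profit:,.0f}")
```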


Although the details of designing a good marketing test are beyond the
scope of this book, it is important to control for both the efficacy of the data
mining model and the efficacy of the offer or message employed. This can be
accomplished by tracking the response of four different groups:
- Group A, selected to receive the offer by the data mining model
- Group B, selected at random to receive the same offer
- Group C, also selected at random, that does not get the offer
- Group D, selected by the model to receive the offer, but does not get it

If the model does a good job of finding the right customers, group A will
respond at a significantly higher rate than group B. If the offer is effective,
group B will respond at a higher rate than group C. Sometimes, a model does
a good job of finding responders for an ineffective offer. In such a case, groups
A and D have similar response rates. Each pair-wise comparison answers a
different question, as shown in Figure 18.1.

[Figure 18.1 shows a two-by-two grid of the four groups: Random & Included
(Group B, randomly selected customers included in the campaign), Modeled &
Included (Group A, high model score customers included in the campaign),
Random & Excluded (Group C, randomly selected customers excluded from the
campaign), and Modeled & Excluded (Group D, high model score customers
excluded from the campaign). Comparing across the included row asks how well
the model works for measuring response; across the excluded row, how well it
works for measuring propensity; and down each column, how well the message
works on random and on modeled customers, respectively.]

Figure 18.1  Tracking four different groups makes it possible to determine both the effect
of the campaign and the effect of the model.
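
The bookkeeping behind Figure 18.1 is simple enough to sketch directly; the
counts below are invented, and the real numbers would come from tracking the
four groups through the campaign.

```python
# Invented counts for the four groups of Figure 18.1.
groups = {
    # label: (customers in group, responders)
    "A modeled, offered":     (10_000, 310),
    "B random, offered":      (10_000, 120),
    "C random, not offered":  (10_000,  45),
    "D modeled, not offered": (10_000, 140),
}

rate = {label: responders / n for label, (n, responders) in groups.items()}
for label, r in rate.items():
    print(f"{label:<24}{r:6.2%}")

# A vs. B: does the model find better prospects for this offer?
print(f"model effect (A - B): {rate['A modeled, offered'] - rate['B random, offered']:.2%}")
# B vs. C: does the offer itself move randomly chosen customers?
print(f"offer effect (B - C): {rate['B random, offered'] - rate['C random, not offered']:.2%}")
# A vs. D: similar rates would suggest the model found propensity to buy rather
# than responsiveness to this particular offer.
print(f"offer effect among modeled customers (A - D): "
      f"{rate['A modeled, offered'] - rate['D modeled, not offered']:.2%}")
```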


This latter situation does occur. One Canadian bank used a model to pick
customers who should be targeted with a direct mail campaign to open
investment accounts. The people picked by the model did, in fact, open
investment accounts at a higher rate than other customers, whether or not they
received the promotional material. In this case there is a simple reason. The
bank had flooded its customers with messages about investment accounts:
advertising, posters in branches, billing inserts, and messages when customers
called in and were put on hold. Against this cacophony of messages, the direct
mail piece was redundant.


Choosing a Data Mining Technique
The choice of which data mining technique or techniques to apply depends on
the particular data mining task to be accomplished and on the data available
for analysis. Before deciding on a data mining technique, first translate the
business problem to be addressed into a series of data mining tasks and
understand the nature of the available data in terms of the content and types of the
data fields.


Formulate the Business Goal as a Data Mining Task
The first step is to take a business goal such as “improve retention” and turn it
into one or more of the data mining tasks from Chapter 1. As a reminder, the six
basic tasks addressed by the data mining techniques discussed in this book are:
- Classification
- Estimation
- Prediction
- Affinity grouping
- Clustering
