“People … operate with beliefs and biases. To the extent you can eliminate both and replace them with data, you gain a clear advantage.”
Moneyball: The Art of Winning an Unfair Game
Analytics married to Big Data must appear to many CEOs these days as a corporate fountain of youth, capable of shedding any marketing sloth while promoting a bullish disposition. Is this a fair assessment?
Reflecting on PwC’s 2016 US CEO Survey, we can see considerable credence given by CEOs to delivering more and better analytics to their firms. While only 33% of US-based CEOs felt that growth was possible this year, we can see where they intended to direct the thrust of their corporations’ efforts:
- 78% are building more robust data & analytics connections, with 68% enhancing customer relationship management;
- 73% will use data & analytics technologies to power their customer engagement; and
- 71% are engaging in cost-reduction initiatives, with 35% reviewing outsourcing processes or functions.
The first two initiatives require an investment in architecting new ways to access heterogeneous data silos, address semantic conflicts, and resolve variations and limitations in time-based data availability, stretching or exceeding existing information technology organizations’ capabilities. Additional staffing to support more complex data mining, particularly data scientists, can also be expected.
Cost reduction has also been top of mind and can be anticipated to remain so as we head into the 2017 planning year, with recession fears continuing to be expressed. As financial analysts, if or when push comes to shove, how comfortable will our key predictive modeling program beneficiaries (marketing in particular) be in reassigning portions of their budgets to support more expansive analytics? Priorities will need clarification to establish whether predictive analytics initiatives are to be funded with a long view in mind. Without a firm commitment this year, predictive analytics could become an unfunded mandate, sacrificed to protect EBITDA.
Before we plunge into new forms of analysis and their impact on enhancing customer experiences, cutting costs, and improving profits, let’s review data’s utility in our firms and what constitutes predictive analytics.
Data on the Server or in the Bank?
There are many things we can do with data that will determine whether our data alchemists can turn it into gold we can bank, or creatively exchange for newfound revenue:
- Retain data in archives (stored and not readily accessible; or in a repository for swift access);
- Access data, performing data mining using advanced statistical programs and query languages;
- Share data across the firm, to best serve user needs;
- Merge selective data across data silos, fortifying its value;
- Analyze data using models to identify data patterns worthy of note; and
- Respond to data as it is presented to us, helping us shape and improve our decision-making.
Not all data can be valued the same, and one of the many challenges for data scientists is determining what data should be gathered, at what frequency, across all data silos, and how it will be distilled into golden opportunities.
We are engaged in predictive analytics when we apply various statistical approaches to data with the specific interest of predicting past events (e.g., the likely suspect(s) associated with the attributes of specific criminal activity that has taken place), current events (e.g., building air-conditioning energy costs saved through the use of electronically generated window shading), or future events (e.g., the number of patients projected to be readmitted to hospitals following a prescribed treatment regimen).
Predictive analytics builds upon a body of descriptive and diagnostic data that explains what occurred and why. While predictive analysis has been around for a considerable time, technology improvements now allow vast amounts of data to be accessed, merged, analyzed, and modeled to generate output that is more timely, more valued in the insights it reveals, and more actionable, enhancing a firm’s or organization’s decision-making.
We also are finding a more nuanced approach in predictive modeling that can help firms simultaneously reduce cost and improve the quality of interactions with customers.
Ready for a Downer with an Uplift Model?
A comparatively new wrinkle in predictive analytics is the incorporation of an uplift model, which is well suited to marketing campaign decision-making. Note that the analytics can extend far beyond marketing to any attempt to determine whether additional persuasion (i.e., influence) is required to render a desired outcome.
Let’s personalize this a bit (and humor me if you are a woman in FP&A).
You (a guy) are trying to influence your sweetheart’s decision to let you go to a Cleveland Cavaliers basketball game against the Boston Celtics, instead of driving to Hamilton, Ontario that same evening to enjoy a girls’ night out with Loreena McKennitt in concert, on the anniversary of your first (fill in the blank: date, kiss, wedding) or other endearing memory.
What’s a guy to do? You buy and present a gorgeous bouquet of red roses and a box of chocolate truffles, and make reservations at a high-end vegan restaurant (which you hate, but your sweetheart adores, with its city skyline view). O.K., you’re out exactly $328.15 (analysts like accuracy) and have succeeded in wooing your sweetheart, who allows you to skip being together on your anniversary.
After leaving the restaurant, your sweetheart tells you, “This was a wonderful evening you gave me… You really didn’t need to do all this. I’d have accepted you being MIA on our anniversary. I know it’s on Wednesday, but I’ll just go see Loreena here in Columbus on November 2nd.”
Here’s the downer: you just spent $328.15 on an evening you didn’t have to, to influence an outcome you wanted and would have received anyway.
This situation is not much different from the issue your Chief Marketing Officer may be confronting this quarter: “Do I spend $500K on a mail campaign reaching all 100,000 of our followers with a new product offer, or can I whittle the campaign down to, say, $300K by reaching only the group of followers who (with our campaign) will respond better than if we hadn’t contacted them at all?”
As a financial analyst, you would like the decision to result in an ROI that more than covers the campaign cost and the cost of sales from incremental new product sales that would otherwise not have taken place.
Up to this point, everyone’s thinking (FP&A’s and Marketing’s) has been that low incremental favorable results (say, a 3% favorable response rate on a 100K total mailing) were acceptable. With data scientists and predictive models, we can now begin to ask whether there is a better way to slim down the target offer recipients to those we can predict, with reasonable certainty, will be more favorably influenced to accept our firm’s offer than if they had not received the offer at all.
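The arithmetic behind that question can be sketched in a few lines. The campaign cost ($500K to all 100,000 followers, 3% response) comes from the scenario above; the $150 contribution margin per incremental sale and the targeted-campaign figures are hypothetical assumptions for illustration.

```python
# Back-of-the-envelope campaign economics. The $500K / 100,000 / 3% figures
# come from the CMO scenario in the text; the $150 margin per incremental
# sale and the targeted-campaign split are hypothetical assumptions.

def campaign_roi(cost, recipients, response_rate, margin_per_sale):
    """Return (incremental profit, ROI), assuming every response is incremental."""
    sales = recipients * response_rate
    profit = sales * margin_per_sale - cost
    return profit, profit / cost

# Full mailing: 100,000 x 3% = 3,000 sales; 3,000 x $150 = $450K gross vs. $500K cost.
full_profit, full_roi = campaign_roi(500_000, 100_000, 0.03, 150)
print(f"Full mailing: profit ${full_profit:,.0f}, ROI {full_roi:.0%}")

# Hypothetical targeted mailing: $300K to the 50,000 most persuadable
# followers, assuming targeting lifts the response rate to 5%.
targeted_profit, targeted_roi = campaign_roi(300_000, 50_000, 0.05, 150)
print(f"Targeted mailing: profit ${targeted_profit:,.0f}, ROI {targeted_roi:.0%}")
```

Under these assumed numbers the blanket mailing loses money while the targeted one turns a profit, which is exactly the trade-off the CMO is weighing.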
We as a firm have assumed that all favorable responses to an offer can be attributed solely to that special offer. In reality, some portion of the customers would have purchased the product anyway within the relevant timeframe under review.
Just a few instances where uplift models are useful include cell phone calling plan renewals, automobile warranty contracts, scheduled equipment servicing, tiered VIP hotel & ocean liner reservations, as well as political literature mailings & political campaign door-to-door canvassing.
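The core uplift idea can be shown with a toy calculation: compare response rates between customers who received a past offer and a held-out control group, segment by segment, and mail only where the offer itself moves the needle. The segment names, rates, and threshold below are hypothetical, purely for illustration.

```python
# Minimal uplift illustration. All segment names and response rates are
# hypothetical; a real model would estimate these per customer.

past_campaign = {
    # segment: (response rate if mailed, response rate if NOT mailed)
    "persuadables": (0.08, 0.02),  # the offer changes their behavior
    "sure_things":  (0.30, 0.29),  # would buy anyway -- mailing is wasted spend
    "lost_causes":  (0.01, 0.01),  # the offer changes nothing
}

def uplift(treated_rate, control_rate):
    """Incremental response attributable to the offer itself."""
    return treated_rate - control_rate

def worth_mailing(segments, min_uplift=0.02):
    """Keep only segments whose estimated uplift clears a threshold."""
    return [name for name, (treated, control) in segments.items()
            if uplift(treated, control) >= min_uplift]

print(worth_mailing(past_campaign))  # only the persuadables clear the bar
```

The “sure things” here are the $328.15 anniversary dinner of the earlier story: a high raw response rate but near-zero uplift, so spending on them buys an outcome you would have received anyway.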
Perils & Uncertainty of Predictive Analytics
“… there is no silver bullet when it comes to forecasting and scenario planning. No tool or method can remove the embedded uncertainty or extract clear signals from murky or limited data.”
Kim Larsen, Chief Algorithms Officer
Data science can truly be a friend to our customers and to our bottom line. But under what circumstances are there financial perils in predictive analytics and predictive models?
How do we treat risk and uncertainty? Early in October, a swarm of earthquakes near the Salton Sea in California prompted the USGS to share an actionable forecast, raising the probability of a Southern California earthquake of magnitude 7.0 or greater to between one in three thousand and one in one hundred over a seven-day period. The city of San Bernardino responded by closing its City Hall on Monday and Tuesday, October 3rd and 4th, as the seven-story building had been identified in 2007 as having seismic issues that would require an estimated $20 million in retrofits. Fortunately, there was no major temblor, loss of life, injury, or damage to city-owned or private structures. The ‘cost’ of taking the increased risk seriously was two days of city employee services being unavailable to residents.
How do we validate that we have all vital data variables to construct probabilities? Not capturing the right or complete data that is acknowledged as needing high visibility can result in catastrophe.
The Challenger space shuttle disaster is an example where the launch commit criteria were inadequate to properly appraise launch risk. Below-freezing temperatures on the day of the launch hardened the O-rings in the booster, preventing them from performing their vital function of sealing the booster joints against escaping hot gas.
When do probabilities expire or need revision? There is no one-size-fits-all answer. New, ongoing control testing can be incorporated into the prevailing statistical probabilities, but testing frequency can vary. New variables can surface through machine learning, human intervention in predictive model elements, or technological advancement. As of early October, SpaceX continues to investigate the root cause of the Falcon 9’s launch pad explosion. Space insurance carriers will undoubtedly be interested in the findings, as they should play into setting risk probabilities and premiums for payloads and future rocket delivery systems.
Also note that where unusual economic conditions materialize (e.g., war, defaults of AAA-rated corporate bonds, major changes in tax rates, regional economic catastrophe, etc.), the projections likely will not hold.
How can we perform projections without hindsight bias? We have to be careful that our firms use all data relevant to generating a correlation to the predicted dependent variable. This means not cherry-picking data (i.e., selecting only records that have the desired dependent variable). The purpose of exercising the predictive model is to pool the entire population being studied, from which the appropriate independent variables will emerge.
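The cherry-picking trap can be made concrete with a toy simulation on synthetic data (the population size and true purchase rate below are invented for illustration): conditioning only on records with the desired outcome destroys the estimate entirely.

```python
# Toy illustration of selecting on the dependent variable.
# The 10,000-customer population and 10% true purchase rate are synthetic.
import random

random.seed(7)
population = [random.random() < 0.10 for _ in range(10_000)]  # True = purchased

# Proper approach: estimate the purchase rate from the whole population.
unbiased = sum(population) / len(population)

# Cherry-picked approach: keep only records with the desired outcome
# (buyers), then "estimate" the rate -- trivially and uselessly 100%.
buyers_only = [purchased for purchased in population if purchased]
biased = sum(buyers_only) / len(buyers_only)

print(f"unbiased estimate: {unbiased:.1%}, cherry-picked: {biased:.0%}")
```

The unbiased estimate hovers near the true 10%; the cherry-picked one is always 100%, which is exactly why the model must be fit on the pooled population rather than on records chosen for their outcome.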
The future of Big Data and predictive analytics is that they will be even more widely used. The benefits include:
- More aggressive use of data to predict and navigate in the future;
- Enhanced customer relationships, using better Business Intelligence on customers, their behaviors and their probabilities of being influenced;
- Greater ROI on executed predictive model decisions over traditional business practices that only casually segmented and analyzed customer, employee, and management behaviors; and
- Elevated appreciation for managed data and how it can drive more holistic behaviors, leverage a firm’s strengths, and differentiate it from its competitors.
The promise of predictive analytics is real, and we can help encourage its acceptance and development as a powerful decision enabler. Meanwhile, within financial planning & analysis (FP&A), we need to better understand how predictive analytics operates and how opportunities and exposures can best be quantified, reviewed, and acted upon in a timely fashion.