Tuesday, February 19, 2013

A/B Testing For Everyone

The folks over at the phenomenal Marketing Experiments Blog had yet another post about A/B testing that reminded me of some consulting work I did in the past. Often, I have found that organizations think you have to be a gigantic company to do A/B testing. The reality is that a company of any size can A/B test just about anything, sometimes to dramatic effect. And a small company can apply very sophisticated marketing analysis very inexpensively in this age of free, high-powered statistical languages.

When I worked for Strategic Energy, management believed we couldn't just send our customers a contract and re-sign them for three years of electricity usage. I said, "What's the harm in trying?" We sent a hundred customers a thank-you for letting us serve them along with a new contract for service. About 35 of them sent us back a signed contract. How much did that test cost? About $300 and half a day of work. After that experience, Strategic Energy started sending every customer under a certain size a renewal contract, saving tens of thousands in sales costs per year for those that responded.

We then sent out postcards to the remaining customers plus about 200 more asking them to contact us about their contract renewal. On one postcard, we put an existing customer photo and an inspirational message about saving their business money. On the other postcard, we put a funny beach photo and a message to the effect of, "Wouldn't you rather be spending your time on the beach than renewing an electricity contract?" We assigned customers randomly to one or the other. To our surprise, the beach one got a statistically significantly better response. Simple A/B test done. Learning learned.
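
If you want to check a split like this yourself, the math is just a chi-squared test on a two-by-two table of responses. Here's a minimal sketch in Python; the counts below are hypothetical (I didn't keep the exact numbers), but the mechanics are the same at any scale.

```python
# Significance check for a two-postcard split: a chi-squared test on the
# 2x2 table of (responded, did not respond) counts. Counts are hypothetical.
from scipy.stats import chi2_contingency

inspirational = [18, 115]  # hypothetical: responded, did not respond
beach         = [34, 98]   # hypothetical: responded, did not respond

chi2, p_value, dof, expected = chi2_contingency([inspirational, beach])
print(f"chi-squared = {chi2:.2f}, p = {p_value:.4f}")
# A p-value below 0.05 is the conventional bar for calling the beach
# postcard's lift "statistically significant" rather than random noise.
```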

I applied this kind of analysis to the funding solicitation work of the Jewish Federation of Greater Pittsburgh to equally powerful effect. In this case, some simple linear regression showed that one of the greatest factors influencing the size of a gift was whether the gift was given online (even when holding donor age constant). Pushing donors to the website to donate increased the size of the gifts, and some tweaking to the website itself increased gift sizes even further. All we needed to complete this analysis was a history of donations and some basic information about the donors and when they responded.
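
For the curious, a regression like that takes only a few lines in Python (or R). This is a sketch under assumptions: the file name and columns (gift_size, donor_age, gave_online) are hypothetical stand-ins for the Federation's actual donation history.

```python
# Sketch: does giving online predict gift size, holding donor age constant?
# File and column names are hypothetical stand-ins.
import pandas as pd
import statsmodels.formula.api as smf

donations = pd.read_csv("donation_history.csv")

# C() treats the online flag as categorical; donor_age is the control.
model = smf.ols("gift_size ~ C(gave_online) + donor_age", data=donations).fit()
print(model.summary())
# A positive, significant coefficient on gave_online means online gifts
# run larger even after accounting for age.
```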

The barrier to basic A/B testing usually lies in company culture, not in cost or capabilities. Companies need to get wired for a "learning culture" that emphasizes marketing science over gut feel. This change must emanate from the senior executive team, and they have to understand how powerful data management and analysis can be to improve marketing response rates, revenues and profits.

As analysis professionals, we need to bring these smarts to the executive team so that they can bring culture change to the rest of the company. I try to remind myself of this goal periodically when I find myself tiring of yet another explanatory meeting with the VPs. Although sometimes repetitive and tiresome, the meetings where we explain what we plan to do once the test results are in are what build the executive support necessary to internalize the learning from the testing over the long term.

Friday, February 15, 2013

Revenge of the Data

I have been following with relish the story about Elon Musk's war with the New York Times over a negative review of the Tesla Model S electric vehicle. What I loved about Musk's retort to the New York Times story is how Tesla Motors managed to use device data to refute it. The whole affair ends up being a debate between the hard data in the device and the reporter's notes.

I take away three conclusions from this episode:
[Image: the reporter's vehicle log, as annotated by an angry Elon Musk]

  1. Data is power. Companies that think about the information they already have (or could readily collect) and then exploit that data create sustainable competitive advantage through their installed base. I learned this firsthand at PPG Industries, where we were able to use tint machine data to examine paint color usage by region. I only wish that PPG had been more open to using the color chip rack to collect data (discreetly and privately) about user interactions with the display. At Vocollect, we are exploring a wide variety of ways to aggregate data from our wearable devices to enhance the user experience.
  2. Companies should get data in the hands of users. I see this war in part as a problem stemming from the New York Times reporter's inability to get all of the information he could have had available...information Tesla then gathered from the log files. Perhaps giving this information to the user in the first place, in a snazzy interface, could have prevented some of the reporter's frustrations. Heck, a number of device manufacturers give the data to users via an API and end up getting cool tools for their other users for free, created essentially by fans of the brand.
  3. Don't get into a pissing match in public. Elon Musk, known for his huge ego, could have been more diplomatic and apologetic to the reporter. Abusing customers or potential customers does not position the brand for success. And essentially accusing a reporter at one of the most prestigious papers in the world of journalistic fraud qualifies as abusing potential customers in my book. Tesla Motors might have gotten a better response from the Times and better publicity by working with them to diagnose what had happened rather than by working against them. Unless you believe that all publicity is good publicity, in which case Musk did the right thing by making this story huge.

I will anxiously await the innovations from car companies and any other company that has direct interaction with the actual consumer, enabling us to understand and improve our own behavior. As you know if you read this blog regularly, I hope to be at the forefront of that user empowerment given my sincere belief in the power of some Major Data Geekitude to improve our collective future.

Friday, December 7, 2012

In Praise of Small Data

I can't read anything these days without hearing about "big data." Just popped over to Google News today and learned that Cloudera, a company basically distributing an easier-to-use version of open-source Hadoop as I understand it, raised $65 million at a valuation pegging them as a $700 million company. Holy mackerel!

These crazy valuations put me in mind of what I call "small data." If big data means synthesizing meaning from a million different pieces of disparate information coming from a variety of sources, little data means synthesizing meaning from several thousand pieces of information. In the former case, think of my company Vocollect's wearable computers collecting thousands of bits of information about thousands of distribution center picks per day from hundreds of thousands of workers. In the latter case, think of my company's fewer than five thousand customers.

Of course, there are exciting things to be discovered from the millions of interactions we see from the wearable computers. But there are even more valuable things we could learn from our existing customer base, and I have found that most companies--even gigantic, multi-billion-dollar ones--are sorely lacking in the ability to aggregate, clean, and take meaning from the data they already hold about those customers.

Back at one of my last jobs, we found after six months of aggregating and cleaning that 25% of our sales were coming from 300 customers out of 40,000. You might hear people talk about the "80/20" rule, but that's the "25/1" rule for those of you keeping track. As in, "25% of our revenue comes from 1% of our customer base"! You better bet that the sales leadership, marketing department, customer service team, and even the VP now know the names of every single one of those 300 customers and that the company treats them a lot better than they used to.
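
Running that concentration check on your own sales data is a few lines of Python; the file and column names here are hypothetical.

```python
# What share of revenue comes from the top 1% of customers?
# Assumes a cleaned sales table with customer_id and revenue columns.
import pandas as pd

sales = pd.read_csv("sales.csv")  # hypothetical export
by_customer = (sales.groupby("customer_id")["revenue"]
                    .sum()
                    .sort_values(ascending=False))

n_top = max(1, int(len(by_customer) * 0.01))
share = by_customer.head(n_top).sum() / by_customer.sum()
print(f"Top 1% ({n_top} customers) account for {share:.0%} of revenue")
```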

Little data is about making small investments in technology, process, and people power to get better information that you should already have access to today. The focus requires all three:

  1. Technology: This is the area everyone always thinks about when data analysis discussions bubble to the surface. Here, I advocate investments not only in technology to store the data, like Salesforce.com, but also in technology to clean the data so it's not completely worthless (see the sketch after this list). How useful is it to sell your brand new freezer-rated wireless headset to current customers if you don't know which ones have freezers? Acquiring the information that's missing requires the second investment...
  2. Process: Great "little data" companies fix the problems of who is responsible for information-gathering, how the information gets into the system in the first place, how you compare it against other systems to ensure linkage and accuracy, and how it gets cleaned and updated over time. Each of these process fixes ensures that when marketing or sales or finance goes to use the information, it gives an accurate and up-to-date picture of the business. That's not possible without...
  3. People power: Great companies assign responsibilities and ownership for the information and, yes, pay for it when necessary. The CEB, my first company, was better at this than any company for which I have ever worked. The way they ensured information was retained was to withhold sales commissions unless the information made its way into ELvIS, our Enterprise-Level Information System (precursor to a real CRM). ELvIS was, by the way, built on MS Access but worked just fine for a long time because of the people and process controls in place. Proving that you don't need a top-flight CRM until the body of data gets too large to manage.
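
As promised in item 1, here's a minimal sketch of that cleaning step in Python. The export file and columns (company_name, last_updated) are hypothetical; real cleanup involves more rules than this, but the shape is the same.

```python
# Collapse obvious duplicate customer records before anyone markets to the list.
import pandas as pd

customers = pd.read_csv("crm_export.csv")  # hypothetical export

# Build a normalized key: trim, lowercase, drop punctuation and suffixes.
customers["name_key"] = (customers["company_name"]
                         .str.strip()
                         .str.lower()
                         .str.replace(r"[.,]", "", regex=True)
                         .str.replace(r"\b(inc|llc|corp)\b", "", regex=True)
                         .str.strip())

# Keep the most recently updated record per normalized name.
deduped = (customers.sort_values("last_updated", ascending=False)
                    .drop_duplicates(subset="name_key"))
print(f"Collapsed {len(customers) - len(deduped)} duplicate records")
```
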
Don't get me wrong. I am generally a huge fan of big data. That's one of the reasons I continue to be bullish on Google, the company with more data than possibly any other company in the world (and a company that understands its value). I'm just saying that small- to medium-sized companies can do amazing things with little data if they pay attention to it and manage it well. That's why you need to hire somebody with experience in this kind of "little data" program and then put serious management attention and focus around it.

A little self-promotion here: I have a lot of experience with "little data." If you ever want to get serious about selling to your existing customers and finding more customers that look like your existing ones, give me a call.

Friday, November 16, 2012

Eye Tracking Revisited

A few years ago, I looked at Tobii's cool eye tracking technology as a possible means of evaluating the effectiveness of paint color merchandisers. I ended up getting a new job before I could complete the project, which was a crying shame given the phenomenally stupid metrics the company was using at the time to determine the effectiveness of the display, such as the number of color chips pulled per year. Like discrete choice research or any of the other "real life" simulation tools gaining in popularity (has anyone seen the growth of Affinnova lately?), eye tracking opened the potential for us to figure out what the consumer really wanted to see rather than what we thought we wanted the consumer to see.

So I was excited to see that one of the Next Gen Market Research 2012 award winners was a company I had never heard of called EyeTrackShop. They claim to have perfected the ability to perform eye tracking using a regular webcam rather than the expensive equipment Tobii requires. If market researchers on the client side got the tiniest bit creative with this technology and it really worked, this drop in cost could spark a revolution in a huge number of businesses.

Even in our business making industrial hardware, the user interface is critical. We now have the potential to borrow a handful of users for short periods of time over the Web to get reactions to early prototypes before we spend millions on tooling for a product that might never gain user acceptance. We could also easily test iterations of our asset management console to see which improvements made it more user-friendly. We could even present prospects with versions of our trade show displays to determine what grabbed the most attention.

Imagine the possibilities! What about A/B testing on physical packaging without ever having to ship the package? Store display pre-testing for seasonal merchandising? Improved impact testing of direct mail calls to action? All now possible with inexpensive eye tracking.

Makes me want to start a market research firm. Stay tuned.

Thursday, November 8, 2012

Simple Modeling

For all you people who thought I was going to talk about supermodels, you can stop reading now.

Today's post is about the kind of model you use to determine your forecasted sales, the effect of a future rebate, or the effect of a new product introduction. I have been thinking a lot about this kind of modeling lately because of Nate Silver, the statistics genius who accurately predicted the election results two nights ago. Today, the Guardian had an awesome explanation of the likely content of Nate Silver's model, which is worth reading in its entirety.

Although Silver apparently uses an advanced statistical technique called hierarchical modeling to perform his analysis, a manager needn't have a degree in statistics to use something more basic but still useful. I put together a similar but simpler model at Strategic Energy using Crystal Ball, an Excel spreadsheet plug-in now owned by Oracle. The software allowed me to build inputs that had an effect on energy prices and then run a series of simulations describing what would happen to electricity prices if my various inputs fluctuated. I chose how each input would fluctuate (for example, natural gas prices might fluctuate in a normal curve by plus or minus 10%) over a period of time, and the model told me the statistical likelihood that the electricity price would get into the range at which we could compete against the regulated utility price.
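
You can do the same thing today without Crystal Ball at all. The sketch below is illustrative only: the price formula, input ranges, and threshold are made-up stand-ins for the real model, but the Monte Carlo mechanics are identical.

```python
# Monte Carlo sketch: fluctuate the inputs, recompute the electricity
# price thousands of times, and count how often we land in the
# competitive range. All numbers here are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(42)
n_trials = 10_000

# Hypothetical inputs: gas swings ~10% around $5/MMBtu; capacity varies.
gas_price = rng.normal(loc=5.00, scale=0.50, size=n_trials)      # $/MMBtu
capacity_cost = rng.uniform(low=8.0, high=12.0, size=n_trials)   # $/MWh

# Illustrative formula: heat rate x gas, plus capacity and a fixed margin.
price_per_mwh = gas_price * 7.5 + capacity_cost + 4.0

utility_price = 55.0  # hypothetical regulated price to beat, $/MWh
odds = (price_per_mwh < utility_price).mean()
print(f"Chance of beating the utility price: {odds:.0%}")
```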

It's relatively easy to use this kind of modeling in all sorts of applications. I used it again at PPG to help forecast exterior paint sales, using simple inputs we knew to affect our sales such as temperature, rebates, competitor rebates, advertising, and price competition. This analysis helped to show how unprofitable our existing rebate program was and how dramatically temperature spikes increased our paint sales, both of which led to savings and greater on-shelf inventory at our retail customers.

Amidst all this usefulness, I'm constantly amazed when managers prefer to use experience and judgment rather than data to make decisions. Crystal Ball costs all of $995. Why leave your decisions up to chance when you can get fairly accurate help from a fairly simple model at a fairly cheap price (or free, if you're willing to learn the R statistics package)? Alternatively, you could spend hundreds of millions of dollars and just ignore the models like this guy did. Good luck with that.

Thursday, October 25, 2012

Read This Now

I was lucky to attend business school with some really smart folks. One is Kerry Edelstein, who founded Research Narrative a year ago today. She has a great post today about interesting questions in media research. It's worth reading particularly because of the emphasis on the business decisions made based on the research. You all know I'm a huge fan of determining the decision you're going to make before doing the research, so I couldn't agree more.

Attention to all full-service market research firms out there: don't forget the message! I always prefer you to come back with a viewpoint. If I don't like what the research said, I can dispute your interpretation with facts, but I (hopefully, if you have done good research) can't dispute the facts themselves. Now, it's up to you to present a story about the facts and help me understand what to do as a result. Then listen to me and help guide my restatement of the story in a way I can tell management.

If Kerry continues to do that for her clients, Research Narrative should go far.

Monday, October 22, 2012

Poll Watchers Beware

Every presidential election year, I find myself re-addicted to an awesome source of polling data, pollingreport.com. These guys aggregate the raw results of various independent polls and post them in a mostly unexpurgated format. I only wish I could do cross-tabs to break down the results further (e.g., by number of Democrats versus Republicans, age, sex, income, and so forth). Frankly, I find the raw data much more enlightening than much of the terrible commentary. [One notable exception to the usual polling pablum was today's excellent Diane Rehm show, with two experts breaking down the polling into the necessary detail.]

Particularly telling is the number of people reported as "unsure" or "refused" in some of these polls. The numbers run as high as 8%, suggesting that a lot of people are either still undecided, hold to the old-fashioned view that one's politics are private, or are just sick of being asked. Nevertheless, one sees that Obama has quite a lead in a number of these polls when voters are given the option to be unsure.

I often find that business executives want to ignore the "don't know" responses in survey data. I believe they think the results are somehow less meaningful if a lot of respondents don't know the answers. On the contrary, I think executives can learn a lot when people are given the "don't know" option.

For example, when I was on the Paint Consumers Research Program board, we changed the survey to allow respondents to say "don't know" when asked what price they paid for paint. Not only did we get much more accurate results, we discovered that almost half of respondents don't know what they paid, even when the purchase was a month old or less. From this, I learned that price matters a lot less than most paint industry executives think it does. In fact, I believe that price point (low, middle, or high in the store's assortment) is probably much more critical in paint buyers' decisions than the actual price. This effect could explain in part why consumers are willing to pay $50 per gallon at Sherwin-Williams when they can get decent paint for $35 per gallon at Lowe's or Home Depot.

Some of the most important decisions in new product development fall to market research interpretation, so I believe everyone involved needs to take a closer look at the results. Surprisingly, for example, the products most likely to succeed are often the products with the most positive responses and the most negative responses. When respondents rate new product ideas, the lack of a strong visceral reaction usually indicates disinterest whereas a strong negative reaction can mean that they have a real interest in the product but are not willing to buy it themselves. A number of market research startups have popped up recently to capitalize on this idea by having respondents design products "for other people" instead of making decisions with themselves in mind.

Perhaps this could be good news for Mitt Romney, whose negative ratings have been going through the roof lately. But not if you subscribe to the idea that real money markets can predict presidential elections. If that is true, our next four years will be Obama's second term.