Getting A Bit Testy About Testing

As a long-time analyst (from several different research and development fields, not just online), I tend to foam at the mouth a little bit when I hear vendors oversimplify analytics—especially test analytics.
While not rocket science, test results should not be interpreted in a slapdash way. We want the net effect of a test to be better understanding, not just a shift in the numbers.
Here are a couple of cases that would have led us down the wrong path if we had taken the results at face value.
1. The Case of the Improved Page Speed
We vastly improved the rendering time of the whole center area of an extremely important, persuasive introductory page that highlighted special deals.
Expectations: Faster pages are always a customer favorite, right? We expected a sure-fire lift across all metrics.
Result: Revenue dropped (significantly!).
Knee-jerk reaction: Roll back the changes.
A closer look revealed: On the slower page, people were impatiently clicking the global navigation buttons while waiting for the slow center area to load. On the faster page, more people were seeing the special deals. More people were buying the special deals, and … regular-price items weren't being seen by those special-deal buyers. The faster page, in other words, was shifting purchases toward high-discount items, to the point of dropping the average ticket size and lowering profits (the sketch after this case walks through the arithmetic).
Action item: Keep the page speed improvement; that's always the right thing to do. But change the page content to point out product variety (at regular price) while still showing some special deals.
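To make the mix-shift point concrete, here is a minimal sketch with invented numbers (none of these figures come from the actual test). It shows how a page that wins on order count can still lose on revenue and profit once low-margin deal purchases crowd out regular-price ones.

```python
# Minimal sketch of the mix-shift arithmetic. All numbers are invented,
# purely to illustrate the mechanism, not the real test results.
# Each segment is (orders, average ticket, margin rate).

def totals(segments):
    """Return (orders, revenue, profit) summed across segments."""
    orders = sum(o for o, _, _ in segments)
    revenue = sum(o * t for o, t, _ in segments)
    profit = sum(o * t * m for o, t, m in segments)
    return orders, revenue, profit

# Slow page: the sluggish center area pushed people toward the global nav,
# so most purchases were regular-price items.
slow = [(400, 120.0, 0.40),   # regular-price orders
        (100,  60.0, 0.10)]   # special-deal orders

# Fast page: more people saw (and bought) the deals, shifting the mix.
fast = [(250, 120.0, 0.40),
        (350,  60.0, 0.10)]

for name, segs in (("slow", slow), ("fast", fast)):
    o, r, p = totals(segs)
    print(f"{name}: {o} orders, ${r:,.0f} revenue, ${p:,.0f} profit")

# slow: 500 orders, $54,000 revenue, $19,800 profit
# fast: 600 orders, $51,000 revenue, $14,100 profit
```

More orders, less money: that's the whole trap, and it only shows up when you segment purchases by price tier instead of reading the topline.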
2. The Case of the Close-Up View
We implemented wonderful, expensive high-definition zoom technology on product photos of high-end fashion accessories. The previous visuals had been small images of the product (on a human model).
Expectations: Potential buyers could now see construction details; up close, the quality and durability of these expensive items were obvious. We expected better sales because the value proposition was so much more convincing.
Result: Online sales stayed flat or dipped slightly, although store sales went up somewhat. The net financial effect of the technology expenditure was, well, not exactly looking positive (the sketch after this case runs the rough numbers).
Knee-jerk reaction: Stop spending on the high-definition technology and maybe roll back the changes to regain the online sales, which have lower overhead and better margins.
A closer look revealed: It wasn't what we had added; it was what we had taken away! The products were no longer being shown on a human model. Potential customers apparently needed a sense of the product's scale before buying. Yes, they were being convinced online that it was a high-value product, but at the same time they were putting off the actual purchase until a trip to the store, because they felt unsure about the size.
Action item: Do a win-win. Keep the luscious new technology, but also bring back the human-scale views.
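Again, a rough sketch with assumed numbers: the margin rates, revenue deltas, and technology cost below are all hypothetical, not from our books. It shows why a store lift can fail to pay for the technology when the lift lands in the lower-margin channel.

```python
# Rough channel-level accounting for the zoom rollout. Every number here
# is an assumption for illustration: margins, revenue deltas, and the
# annualized technology cost are all made up.

ZOOM_TECH_COST = 50_000.0   # assumed annualized cost of the zoom technology

def net_effect(online_delta, store_delta,
               online_margin=0.35, store_margin=0.20):
    """Profit change from revenue deltas in each channel, minus tech cost."""
    return (online_delta * online_margin
            + store_delta * store_margin
            - ZOOM_TECH_COST)

# Online revenue dipped slightly; store revenue rose somewhat.
print(f"Net effect: {net_effect(online_delta=-20_000, store_delta=150_000):+,.0f}")
# -20,000 * 0.35 + 150,000 * 0.20 - 50,000 = -27,000
```

The store lift looks healthy in isolation; netted against the channel margins and the spend, it isn't.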
I hope you get the idea about the need for second and third thoughts, and why I object to quick, unthinking application of surface results. Stay tuned for Part II, where I'll talk about a few things we do that make those second and third thoughts easier and better.
